Running Multiple Full Nodes on One Host


Cassandra is well-known for its ability to handle some of the world’s largest database workloads. It was developed at Facebook and is now used by other web-scale companies like Instagram, Reddit, and Netflix. But Cassandra is also a beast to run, taking considerable resources to function at peak performance.

The cost of operating a Cassandra cluster can be quantified by:

- The number of compute resources in the cluster.
- The amount of storage consumed by the dataset.
- The network transfer between nodes and ingress/egress for connected clients.

The more resources consumed by your cluster, the more it will cost. That seems obvious, but the challenge comes in minimizing the resources to run Cassandra while maintaining sufficient performance. This is especially challenging when you want to run multiple Cassandra clusters (or rings).

The simplest way to operate multiple Cassandra clusters is to use new physical hosts for each ring. But...


For your personal website, you may want to host multiple Node apps that run on different port numbers on the same host. Of course, you can access each application directly by its port number, but this is inconvenient and insecure. A better approach is to use a proxy server that directs traffic to each of your Node apps behind the scenes.

In this post I'll show you how to set up a simple proxy server in 5 minutes. Let's get started!


I run a Node application called Fit Bank on this website. (The site is hosted by Digital Ocean, but you could easily do this with AWS or any other provider that gives you a host with SSH access.) Let's say that Fit Bank runs on port 3456. Until recently, I used a link like this to access it:

// The old, frustrating link to my side project

The port number is annoying to type and also insecure; many corporate networks will block traffic to external ports except for standard ones like 80. I...
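For context, the kind of proxy rule that removes the port from the URL looks roughly like this in nginx (only port 3456 comes from the post; the domain name and location path are placeholders):

```nginx
server {
    listen 80;
    server_name example.com;

    location /fitbank/ {
        proxy_pass http://127.0.0.1:3456/;   # forward to the Node app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```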


One of the first Cassandra tickets I worked on had me reviewing some code that visualized the node ring. Properly testing the code required that I run a cluster.

But I didn't have access to a cluster. Neither did I feel like creating a virtual cluster by building a VM and cloning it several times. What I wanted was to run several instances of Cassandra on a single machine with multiple interfaces, all pointed at the same compiled code (without multiple svn checkouts).

The Cassandra wiki explains how to tweak Cassandra settings by editing its configuration file, but doesn't explain what needs to be done to run concurrent instances.

It turned out not to be too difficult. I figured it might be daunting enough to Cassandra noobs (of whom we're seeing more lately due to some great exposure) that a blog post might be helpful.

This tutorial assumes that you'll want to run multiple instances of Cassandra on code built by ant and not a standalone jar. I am also...
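For orientation, these are the kinds of per-instance settings each concurrent instance needs to own (shown here in modern cassandra.yaml terms; the post predates this file, and all values are illustrative):

```yaml
listen_address: 127.0.0.2          # 127.0.0.3, ... for the other instances
rpc_address: 127.0.0.2
data_file_directories:
    - /var/lib/cassandra2/data     # separate data dirs per instance
commitlog_directory: /var/lib/cassandra2/commitlog
saved_caches_directory: /var/lib/cassandra2/saved_caches
```

In modern releases each instance also needs its own JMX port so the JVMs don't collide.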


Ethereum is one of the most important blockchains present today, not only because it represents another cryptocurrency, but also because Ethereum is technically a “world computer” that unifies the processing power of the network’s public nodes. Ethereum’s “world computer”, or the Ethereum virtual machine (EVM), can be used by peers across the network to execute smart contracts.

Ethereum’s platform was launched in 2015, so it is still in its infancy. In my opinion, Ethereum is currently undervalued and I won’t be surprised if Ethereum’s price surpasses that of Bitcoin during the upcoming few years.

Throughout this article, I will present you with an easy-to-follow guide to help you set up an Ethereum node.

Geth and Eth:

Before setting up your Ethereum node, there are two important pieces of software that you have to know about: Geth and Eth.

Geth and Eth are two separate command line tools that can run a full Ethereum node, public or private,...
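As a taste of what's ahead, launching a node with Geth and an explicit data directory might look like this (the directory path is a placeholder; run geth help for the full option list):

```
$ geth --datadir ~/ethereum-data console
```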


This post builds upon the earlier article (Creating a simple Cluster on a single LINUX host) which explained how to install and run a Cluster where all of the nodes run on the same physical host.

The single host solution is not great for a real deployment – MySQL Cluster is designed to provide a High Availability (HA) solution by synchronously replicating data between data nodes – if all of the data nodes run on a single host then that machine is a single point of failure.

MySQL Cluster running across 2 hosts

This article demonstrates how to split the nodes between hosts; the configuration will still be fairly simple, using just 2 machines but it should be obvious how to extend it to more.

This new Cluster will be split across 2 machines, with all functions duplicated on each machine as shown in the diagram.

Downloading and installing

In the following example, host “ws1” has the IP Address and “ws2” has...
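For orientation, the duplicated-on-both-hosts layout is described in the management server's config.ini; a minimal sketch might look like this (only the hostnames ws1/ws2 come from the article, everything else is illustrative):

```ini
[ndbd default]
NoOfReplicas=2          # synchronous replication between the two data nodes

[ndb_mgmd]
hostname=ws1
[ndb_mgmd]
hostname=ws2

[ndbd]
hostname=ws1
[ndbd]
hostname=ws2

[mysqld]
hostname=ws1
[mysqld]
hostname=ws2
```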


Requirements to Follow This Tutorial

You need to have nginx and Node.js installed, and there are already well written tutorials about these topics on DigitalOcean:

How to install nginx and How to install Node.js.

In addition, you should already own a domain, in order to map a running Node.js service to a domain name, instead of navigating to http://[your-vps-ip]:[port].

Running Your Node.js Application with Forever

Forever is a simple command line tool for ensuring that a Node.js application runs continuously (i.e. forever). This means if your app encounters an error and crashes, forever will take care of this issue and restart it for you.

Simply install forever globally and forever can be used in a matter of seconds:

npm install forever -g

To start a script with forever you need to follow these steps:

Navigate to your Node.js application:

cd /path/to/your/node/app/

and run the server/main JavaScript file with...
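With forever installed, starting the main file typically looks like this (app.js is a placeholder for your entry point):

```
$ forever start app.js
$ forever list          # confirm the process is being watched
```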


Table of contents:

- What pre-requisites are necessary for running an Open MPI job?
- What ABI guarantees does Open MPI provide?
- Do I need a common filesystem on all my nodes?
- How do I add Open MPI to my PATH and LD_LIBRARY_PATH?
- What if I can't modify my PATH and/or LD_LIBRARY_PATH?
- How do I launch Open MPI parallel jobs?
- How do I run a simple SPMD MPI job?
- How do I run an MPMD MPI job?
- How do I specify the hosts on which my MPI job runs?
- I can run ompi_info and launch MPI jobs on a single host, but not across multiple hosts. Why?
- How can I diagnose problems when running across multiple hosts?
- When I build Open MPI with the Intel compilers, I get warnings about "orted" or my MPI application not finding … What do I do?
- When I build Open MPI with the PGI compilers, I get warnings about "orted" or my MPI application not finding … What do I do?
- When I build Open MPI with the Pathscale compilers, I get warnings about "orted" or my MPI application not finding …

Those are all very nice articles, but they don't help me now.

Let's make this an official one....

I use RH9 with Apache updated until today. I already had one website, a PHP-driven one with MySQL integration. I do not run my own DNS, and I prefer not to, even. I simply applied for registration, gave them my IP, and it worked.

So, my website was fine, and a friend asked me if he could run his website on my server. That's when I posted this topic. I read the articles and it did not seem rocket science to me. I told him my IP and he transferred the name to my public IP with his registration service.

So, to make it easy for him, I made him a user account, and I started the http server config tool that came with Red Hat. I would have preferred to edit httpd.conf myself, but things got weird when I tried to vi httpd.conf, even when the server was stopped. It seems to contain no information, while I'm sure something should be there. Anyway.
I stopped the...


Configuring a Riak cluster involves instructing each node to listen on a non-local interface, i.e. not the loopback address, and then joining all of the nodes together to participate in the cluster.

Most configuration changes will be applied to the configuration file located in your rel/riak/etc directory (if you compiled from source) or /etc (if you used a binary install of Riak).

The commands below presume that you are running from a source install, but if you have installed Riak with a binary install, you can substitute the usage of bin/riak with sudo /usr/sbin/riak and bin/riak-admin with sudo /usr/sbin/riak-admin. The riak and riak-admin scripts are located in the /bin directory of your installation.

Note on changing the name value

If possible, you should avoid starting Riak prior to editing the name of a node. This setting corresponds to the nodename parameter in the riak.conf file if you are using the newer configuration system, and to the -name parameter...
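For the newer configuration system, the relevant riak.conf lines might look like this (the IP address is a placeholder for your node's non-local interface):

```
nodename = riak@192.168.1.10
listener.http.internal = 192.168.1.10:8098
listener.protobuf.internal = 192.168.1.10:8087
```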


If you want to run multiple websites on a single AWS host (or really, any other Apache 2+ web server that requires manual configuration), this guide is for you!

I’ll walk you through the process of setting up your AWS server to support multiple sites, domains, or subdomains using Apache’s Virtual Host (vhost) feature. It’s this feature that allows Apache to detect the host/domain name being requested so that it can serve up the appropriate site from a list of options.

Don’t worry, it’s actually not as complicated as it sounds… and I’ll leave out all the wherefores and other technical mumbo jumbo, since that’s available elsewhere. You’re here for clarity.


Generally speaking, this guide assumes that you are using an Amazon Linux LAMP stack on a single EC2 instance, and that you are using Amazon’s Route 53 for your DNS service.

If you followed my earlier blog post, The Ultimate Guide to WordPress on AWS EC2, then you already meet all...
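As a preview of what the vhost configuration boils down to, a minimal pair of name-based virtual hosts looks roughly like this (domains and paths are placeholders):

```apache
<VirtualHost *:80>
    ServerName site1.example.com
    DocumentRoot /var/www/site1
</VirtualHost>

<VirtualHost *:80>
    ServerName site2.example.com
    DocumentRoot /var/www/site2
</VirtualHost>
```

Apache compares the Host header of each request against the ServerName entries and serves the matching DocumentRoot.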


Thank you for your answer!

However, I am not sure whether the amount of RAM is the problem in my case.

I have a total of 32GB of RAM on my system. Following the vMX getting started guide, I configured 2GB of RAM for each VCP and 6GB of RAM for each VFP.

I can start up to three instances of vmx without any errors.

Afterwards, the "free -m" command gives me this:

                     total    used    free  shared  buffers  cached
Mem:                 32164   19651   12512       1       23    3410
-/+ buffers/cache:           16217   15946
Swap:                    0       0       0

So, I'm assuming (even though I have already started 3 vMX instances with a total configuration of 24GB of RAM according to the YAML files) that I should still have about 12GB of RAM left for the startup of a fourth vMX instance (maybe due to overcommitment?).

Once I start the fourth instance of vmx I get the "file_ram_alloc: can't mmap RAM pages: Cannot allocate memory"-error.

At that moment in...


This article is about setting up a DataStax Enterprise cluster running on a single host.

There are a variety of reasons why you might want to run a DataStax Enterprise cluster inside a single host. For instance, your server vendor talked you into buying this vertical-scale machine but Apache Cassandra™ can't effectively use all the resources available. Or your developers need to test your app as they develop it, and they'd rather test it locally.

Whatever the reason, you'll learn how to set the cluster up from the ground up.

Multi-JVM Multi-Node HOWTO

The goal is to have a dense node: a single box running multiple DataStax Enterprise database only nodes in a single cluster.

The DataStax Enterprise cluster that we build in this blog post will consist of:

- 3 DataStax Enterprise nodes running an Apache Cassandra™ workload.
- A simple configuration without internode encryption.
- Multiple interfaces (all virtual in this example).
- Each node will bind...
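Creating those virtual interfaces usually comes down to adding loopback aliases, e.g. (addresses are illustrative):

```
$ sudo ip addr add 127.0.0.2/8 dev lo   # interface for the second node
$ sudo ip addr add 127.0.0.3/8 dev lo   # interface for the third node
```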

PDSH is a very smart little tool that enables you to issue the same command on multiple hosts at once, and see the output. It is a high-performance, parallel remote shell utility. It can run multiple remote commands in parallel and uses a "sliding window" (or fanout) of threads to conserve resources on the initiating host while allowing some connections to time out. In this article we explain how to install the pdsh tool and show a few examples.

How to Install PDSH

Installing pdsh is really simple; just follow the steps below.

1) Download the latest version of PDSH from the Google Code website.

2) Extract and decompress the pdsh archive:

[shaha@oc8535558703 Downloads]$ bzip2 -dc pdsh-2.26.tar.bz2 | tar xvf -

3) Now install pdsh with the commands below:

[shaha@oc8535558703 Downloads]$ cd...
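Once installed, typical usage looks like this (hostnames are placeholders):

```
$ pdsh -w node[01-04] uptime      # run uptime on node01..node04 in parallel
$ pdsh -w host1,host2 "df -h /"   # explicit comma-separated host list
```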

This is running on Ubuntu 13.10

Here I would like to use the Sails.js framework (version 0.9.13) to create Node.js apps.

1. Install node.js on Ubuntu

2. Create 2 apps

Install Sails.js globally

Create projects

Edit the home page for both projects (to differentiate them)
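The commands behind the steps above would look roughly like this (project names are placeholders; the post targets Sails 0.9.13):

```
$ npm install -g sails@0.9.13   # install Sails.js globally
$ sails new project1            # create the first project
$ sails new project2            # create the second project
```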



3. Change the environment to production



In order to make the app run in the background, we need forever

Now start the app using forever

Let’s open up your browser, and type localhost:8081… Oops… it doesn’t work as expected

Let’s check the log

See the content of the file

Now, the error tells us that we need to install Sails.js locally

Repeat the same thing on project2

Now open up the browser; it should show the content.

4. Bind different domain name to different...


Here's how to get multiple sites working on localhost using Windows XP. NB: This post was originally written for Drupal 4.6.x but has been updated for 6.x, so some of the comments below are now obsolete.


This page assumes that you have PHP, Apache, MySQL and Drupal all installed and working and have a MySQL admin tool available; the free community edition of SQLyog will do nicely, or you can use phpMyAdmin if you prefer. Again, I'll assume you've got one of these installed and connected to your database server. I'll use SQLyog here.

If you don't yet have PHP, Apache and MySQL installed then XAMPP will help! See also

Finally, I'll assume you're starting with a working D6.x installation, have followed the installation instructions and got the default site up and running. If not, start here

Multi-site setup

OK, to set up multi-sites there are four stages to...
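Before the stages themselves, it helps to see the multi-site convention the stages build toward: each additional site gets its own folder under sites/ containing a settings.php that points at that site's database (the domain below is a placeholder):

```shell
# Sketch of Drupal's multi-site layout (example.com is a placeholder).
mkdir -p sites/example.com
# In a real install you would copy default.settings.php here and set
# $db_url; an empty placeholder file stands in for it in this sketch.
touch sites/example.com/settings.php
ls sites/
```

Drupal matches the requested hostname against the folder names under sites/ and loads that folder's settings.php.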


The cluster administrator can run commands remotely on one or more nodes by doing one of the following:

Using the Compute Cluster Administrator snap-in, a cluster administrator can issue a command-line action on one or more selected compute nodes. The command is immediately run through the Job Scheduler on a priority basis. The command can be any valid command-line argument. The credentials of the user submitting the command-line action are used to run the action.

A cluster administrator can use remote command execution to run scripts and programs on compute nodes. This command is very useful when running commands across multiple nodes. You can run commands on paused nodes.

Using this feature is the same as running the clusrun command.

You can place a script on a shared folder on the head node and then remotely run the script.
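At the command line that amounts to a clusrun one-liner; the exact switches vary between releases, so treat this as a sketch (node and share names are placeholders):

```
> clusrun /nodes:node1,node2 \\headnode\scripts\setup.cmd
```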

To run a command on a node

From the Compute Cluster Administrator, navigate to the Node Management tile by...


Although Hadoop is designed and developed for distributed computing, it can be run on a single node in pseudo-distributed mode, even with multiple data nodes on a single machine. Developers often run multiple data nodes on a single machine to develop and test distributed features, data node behavior, name node interaction with data nodes, and for other reasons.

If you want to see Hadoop's distributed data node / name node interaction and you have only one machine, you can run multiple data nodes on that machine. You can see how the name node stores its metadata (fsimage, edits, fstime) and how data nodes store data blocks on the local file system.


To start multiple data nodes on a single node first download / build hadoop binary.

- Download the hadoop binary, or build it from the hadoop source.
- Prepare the hadoop configuration to run on a single node (change Hadoop's default tmp dir location from /tmp to some other reliable location).
- Add the following script to the...
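A hypothetical version of such a script sets per-instance environment before launching each extra DataNode (all names and paths below are assumptions, not from the post):

```
$ export HADOOP_CONF_DIR=$HADOOP_HOME/conf-datanode2   # per-instance config copy
$ export HADOOP_PID_DIR=/tmp/hadoop-datanode2          # avoid PID-file clashes
$ export HADOOP_LOG_DIR=/tmp/hadoop-datanode2/logs
$ $HADOOP_HOME/bin/hadoop-daemon.sh start datanode
```

Each extra instance also needs distinct data directories and DataNode ports in its config copy, or the second DataNode will collide with the first.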

Set up your droplet

First you will need to set up an SSH key

Click on the SSH Keys tab, then click Add SSH Key

Digital Ocean guide

Create the key using OpenSSH on Mac OS X or Linux, or using PuTTY on Windows.

Copy the ssh key and add it into Digital Ocean and give it a name.

Make a backup of your SSH key on your external hard drive etc...

Now click on the Create Droplet tab and fill out the fields for your droplet

Select your droplet hostname and your size; the $5 or $10 a month plans are probably best if you are just using this for development projects.

Select your region and distribution; I went with Ubuntu and New York.

On the Image/Applications tab select any applications you want set up by default, like node.

Now click Create Droplet to add your droplet

Connect to your Droplet

Digital Ocean guide

Connect via SSH from Terminal as follows, and if your SSH key is setup on your PC you...
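The connection itself is a one-liner (substitute your droplet's IP address):

```
$ ssh root@<your-droplet-ip>
```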

Running Programs With the mpirun Command

This chapter describes the general syntax of the mpirun command and lists the command’s options. This chapter also shows some of the tasks you can perform with the mpirun command. It contains the following sections:

About the mpirun Command

The mpirun command controls several aspects of program execution in Open MPI. mpirun uses the Open Run-Time Environment (ORTE) to launch jobs. If you are running under distributed resource manager software, such as Sun Grid Engine or PBS, ORTE launches the resource manager for you.

If you are using rsh/ssh instead of a resource manager, you must use a hostfile or host list to identify the hosts on which the program will be run. When you issue the mpirun command, you specify the name of the hostfile or host list on the command line; otherwise, mpirun executes all the copies of the program on the local host, in round-robin sequence by CPU slot. For more information about...
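As a sketch, a hostfile plus mpirun invocation might look like this (hostnames, slot counts, and the program name are placeholders):

```
$ cat my_hosts
node1 slots=2
node2 slots=2
$ mpirun -np 4 --hostfile my_hosts ./my_mpi_program
```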


First things first, if you bought your domain elsewhere then you first need to point that domain to your server. You basically have two options here

- installing, setting up, and running a DNS server on your VPS and pointing the DNS records (from the control panel where you bought the domain) to your VPS
- setting your DNS zone file A record (in the control panel where you bought the domain) to the VPS IP

This post explains what the pros and cons are.

Now, if you do that for multiple domains, they will all point to your server’s IP and show the same thing: essentially whatever is running on port 80 (and that, in our main Nginx installation, will be the default Nginx welcome screen). If you have multiple Node.js applications running on different ports, and you want to point each domain to a particular application, this is where Nginx comes in: it takes the requests for each domain and routes them to the appropriate port where the appropriate Node.js...
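A sketch of that routing is one server block per domain, each proxying to its app's port (all domain names and ports below are placeholders):

```nginx
server {
    listen 80;
    server_name app1.example.com;
    location / { proxy_pass http://127.0.0.1:3000; }
}

server {
    listen 80;
    server_name app2.example.com;
    location / { proxy_pass http://127.0.0.1:3001; }
}
```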


Some folks on our team have been working on making node.js work awesomely on Windows. There are a few questions you might have.

First, what's node.js?

If you're not familiar with node.js, it's a new web programming toolkit that everyone's talking about. It's the one that makes you feel not hip if you don't know what it is. Like Ruby on Rails was a few years back. Folks called it "Node" and it's basically server-side JavaScript. The idea is that if you are doing a bunch of JavaScript on the client and you do JavaScript all day, why not do some JavaScript on the server also. One less thing to learn, I suppose.

If you are an ASP.NET programmer, you can think of node.js as being like an IHttpHandler written in JavaScript. For now, it's pretty low-level. It's NOT an HttpHandler, but I'm using an analogy here, OK? Here's a lovely article by Brett McLaughlin that goes into more detail about Node.js and what it is. His subtitle is "Node isn't always the solution, but...


Parallel Jobs

Shared-Memory Multiprocessor Parallel Execution

Gaussian defaults to execution on only a single processor. If your computer system has multiple processors/cores, and parallel processing is supported in your version of Gaussian, you may specify the CPUs on which to run with the %CPU Link 0 command. For example, the following specifies that the program should run on the first 5 cores of a hexacore system (reserving one core for other use):
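The Link 0 line itself is elided above; given the syntax just described, it would plausibly read:

```
%CPU=0,1,2,3,4
```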


The node list can also be specified as a range (e.g., 0-5). Ranges can also be followed by a suffix of the form /n, which says to use every nth processor in the range (e.g., /2 specifies every second processor/core).

The older %NProcShared link 0 command can be used to specify the total number of processors on which to execute (leaving the selection of processors to the operating system). Clearly, the number of processors requested should not exceed the number of processors available, or a...


In this tutorial I will describe the required steps for setting up a distributed, multi-node Apache Hadoop cluster backed by the Hadoop Distributed File System (HDFS), running on Ubuntu Linux.

Hadoop is a framework written in Java for running applications on large clusters of commodity hardware and incorporates features similar to those of the Google File System (GFS) and of the MapReduce computing paradigm. Hadoop’s HDFS is a highly fault-tolerant distributed file system and, like Hadoop in general, designed to be deployed on low-cost hardware. It provides high throughput access to

In a previous tutorial, I described how to set up a Hadoop single-node cluster on an Ubuntu box. The main goal of this tutorial is to get a more sophisticated Hadoop installation up and running, namely building a multi-node cluster using two Ubuntu boxes.

This tutorial has been tested with the following software versions:

Ubuntu Linux 10.04 LTS (deprecated: 8.10 LTS, 8.04,...