
Category Archives: Cloud Computing

How to work with storage in SoftLayer

SoftLayer offers several different types of storage:

        1. Local
        2. SAN
        3. SATA II
        4. Portable storage
        5. NAS
        6. Lockbox
        7. iSCSI
        8. Evault
        9. Object Storage
        10. QuantaStor

1. Local storage is the storage where the CCI (Cloud Computing Instance) runs; it is hosted on the physical node itself and built on RAID 10. Technology: switching to NetApp.

2. SAN (Storage Area Network) is storage outside the physical node, so if the physical node goes down we can recover/provision/run the VM (virtual machine) on another physical node. The performance is a little lower than local storage. Technology: Isilon.

3. SATA II: the hard drives used for storage on Bare Metal machines (physical servers), not built on RAID. Just 1 drive is available with the hourly option, with the possibility to upgrade to 1 more drive with the monthly option.

4. Portable storage is storage attached to the CCI that is not the primary disk hosting the root file system. It means we can attach disks, either local or SAN, to the CCI, and later attach/detach them and move them between CCIs.

5. NAS (Network Attached Storage): it is possible to create a NAS space in order to share storage space between different machines. Technology: EqualLogic.

6. Lockbox is a NAS space of a maximum of 1 GB, only available with the monthly rate option.

7. iSCSI (internet Small Computer System Interface): it is possible to create an iSCSI space to attach to a machine, and to attach/detach it between different machines. It is also possible to share it between different servers ("Connect multiple servers to a single iSCSI LUN"). Technology: Dell Logics.

8. Evault is a space attached to a CCI where that CCI can be backed up; it is not possible to back up a different CCI it is not attached to. It is only available with the monthly rate option. Technology: Dell Logics.

9. Object Storage: space for unstructured data that can be accessed from anywhere using a connector or the API. Technology: OpenStack Swift, running on top of a group of bare metal servers.

10. QuantaStor: a dedicated server with mass storage and the QuantaStor software on top, supporting SAN and NAS protocols. Technology: bare metal servers; it is just a software solution on top of the server.

Storage options for the CCI:

blog_1

This means that the type of storage we choose for the root partition (first disk) determines the rest of the disks. If we choose SAN, we can have up to 5 disks with different capacities from 40 GB to 2000 GB (the options for the first one are 25 GB or 100 GB); if we choose local, we can have up to 2 disks: the first one 25 GB or 100 GB, and the second disk up to 300 GB.

Here is a table with the different storage services, how they can be accessed, and their sizes:

 Blog_2

 

You can find more info at www.softlayer.com


How to deploy apps in the cloud.

This post is just a little idea about how to move traditional applications to a cloud environment. The steps, ideas, or whatever are totally subjective and based on my own experience, and I repeat… IT'S JUST A LITTLE OVERVIEW OF THE BIG PICTURE.

1.- Traditional APP

A traditional app is divided into three layers: application (presentation), business, and data. In a network topology this translates to something like this (in a very simplified view):

video_on_demand

figura11

Features:

    • Normally the "app layer" is a web server and the data it serves is static (it doesn't vary).
    • The database configuration is always the same; it doesn't vary.
    • The only data that varies is the data stored in the database.

This type of topology can be seen as a cluster or a grid (a draft of a kind of "grid" infrastructure is shown below):

video_on_demand-1

 

In these infrastructures, the frontend takes the requests and sends them to the nodes, where they are processed. But when the load varies, this kind of infrastructure has some problems:

  • Investment: they are very expensive infrastructures, usually built with very expensive hardware.
  • Peaks: these infrastructures have limits. What happens when the limit is reached? And when it's not reached, the whole infrastructure has to be maintained as if it were being used at 100%.
  • Maintenance: when these infrastructures are big, they need to live in the right place, a datacenter or a room conditioned for that (in the simplest cases), and maintaining a datacenter costs a lot of money (electricity, security, monitoring, …).
  • HW failure: if the hardware fails, how long does it take to replace? Is that time critical?

Let's take a concrete example: a blog implemented with WordPress. WordPress is composed of two layers: app and database. The configuration of the database will always be the same, but the data stored in the database will not. The app files will always be the same; they are not going to vary.

figura2

 

So…how can we deploy this “traditional” app in the cloud?

 

2.- Why should I move my apps to the cloud?

This question is answered with these points:

  • HW: the cloud doesn't need you to invest in HW; the cloud provides the HW (as much as you want).
  • Pay as you go: if you have your app in the cloud, you JUST pay for the resources the app consumes. If you need a machine for just 1 hour, you pay for one machine for 1 hour; if you need 9999 machines three days per week, you pay for 9999 machines three days per week.
  • No maintenance needed: the price we talked about in the last point includes electricity, upgrades, … all the costs derived from a datacenter. You don't need to worry about these things.
  • Peak responses: the cloud can support all kinds of peaks (if it's correctly configured). As long as you can provision as many instances as you want, you can absorb these peaks. The cloud can scale out automatically (only with some providers).

Let me show the elasticity of the cloud with an example:

elasticidad-en-cloud

 

What that figure means:

  • 1 server on the cloud at 40% CPU (CPU is chosen here as the performance metric, but we could choose anything else; the numbers are just an example).
  • When the server reaches 80%, the infrastructure grows automatically and another server is attached to the load balancer.
  • When the metric drops to 20%, the extra server is shut down and we don't need to pay for it.

That's the elasticity of the cloud, in three simple steps. For me, personally, it is the key to cloud computing.
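As a very rough illustration of that loop, a cron'd check script could look like the sketch below. The provision/deprovision commands are placeholders, not a real provider CLI; a real setup would use the provider's API or its built-in autoscaling.

#!/bin/bash
# Hypothetical autoscaling check: provision_instance, deprovision_instance and the
# load balancer commands below are placeholders, not real provider tools.
THRESHOLD_UP=80
THRESHOLD_DOWN=20
# 1-minute load average expressed as a % of one core, as an example metric
CPU=$(awk -v cores=$(nproc) '{printf "%d", ($1/cores)*100}' /proc/loadavg)
if [ "$CPU" -ge "$THRESHOLD_UP" ]; then
    provision_instance --image app_image && add_to_load_balancer      # placeholders
elif [ "$CPU" -le "$THRESHOLD_DOWN" ]; then
    remove_from_load_balancer && deprovision_instance                 # placeholders
fi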

There is one more element that is very important for building apps in the cloud: storage. When I talk about this storage, I'm not talking about a hard drive that can be attached to a machine; I'm talking about a virtually unlimited space where I can store all the files that the app stores (not the files of the app itself, I mean the files that the users of the app upload, i.e. dynamic content). This storage has to provide an API so these features can be embedded in the app's code. We are going to call this "dynamic storage" to differentiate it from the other kinds of storage that providers offer.
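For example, assuming an S3-compatible object store and the AWS CLI (the bucket and file names below are made up for illustration), storing a user upload and producing a temporary download link could look like this:

# Upload a user-generated file to the shared "dynamic storage" (hypothetical bucket name)
aws s3 cp /tmp/user_upload.jpg s3://my-app-user-uploads/uploads/user_upload.jpg
# Create a temporary URL (1 hour) so the file can be served without touching the web servers
aws s3 presign s3://my-app-user-uploads/uploads/user_upload.jpg --expires-in 3600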

3.- An example of infrastructure in the cloud.

 

Look at the figure:

figura3

Let's see the way it should work:

  • We have three images created and modified by us with static content: app, ddbb and ddbb_backup. The content of these images never varies.
  • At the beginning we just have 2 load balancers: 1 load balancer for the app with 1 app image, and 1 load balancer with 1 ddbb image and the ddbb_backup image. The database backup image is synchronized with the database and, depending on our backup policy, we can make a backup of the database without stopping the database engine.
  • Each database machine has persistent storage attached. This storage is like a hard drive; we use it because the stored data can never be lost, so if there is some problem with the instance and it shuts down, the data stored on this persistent storage is safe. We store the database's files there.
  • There is a "dynamic storage" to save the files generated by the app's users.
  • The infrastructure is totally elastic: it starts with 1 server for the app and another for the database, and when one of them reaches 80% CPU usage, the infrastructure automatically provisions another server to help with the load.

This is just a little overview about deploying apps in the cloud, and IT'S JUST MY WAY TO DO IT. I'm sure there are better ways.

 

 

 


How to create a High Availability load balancer on SmartCloud Enterprise (1/2)

Hello guys,

The purpose of this post is to create a high availability load balancer for HTTP traffic (on Red Hat); the process should be the same, with a few changes, for other types of traffic. The following image shows the structure of the setup:

balanceador-267x300

 

We should follow all the steps on both servers. The only difference between them is in the Keepalived conf file, where we must set backup or master.

1.- Install HAproxy and Keepalive.

rpm -ivh http://ftp.astral.ro/mirrors/fedora/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm
yum -y install haproxy keepalived
2.- Configure Keepalived

mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
nano /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {   # Requires keepalived-1.1.13
    script "killall -0 haproxy"   # cheaper than pidof
    interval 2   # check every 2 seconds
    weight 2     # add 2 points of prio if OK
}

vrrp_instance VI_1 {
    interface eth0
    state MASTER             # MASTER on the master node, BACKUP on the backup node
    virtual_router_id 51
    priority 101             # 101 on master, 100 on backup
    virtual_ipaddress {
        ip_virtual           # change for the virtual IP
    }
    track_script {
        chk_haproxy
    }
}
3.- Now we need to configure the system to allow HAProxy to bind to the shared virtual IP address. Edit the sysctl.conf file and add:

nano /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind = 1
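To apply the setting without rebooting (standard sysctl usage):

sysctl -p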
4.- Let's check that we are doing well:

service keepalived start
ip addr sh eth0
eth0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
inet xx.xx.xx.xx/xx brd xx.xx.xx.xx scope global eth0
inet xx.xx.xx.xx/xx scope global eth0
inet6 xxxx::xxx:xxxx:xxx:xxxx/xx scope link
valid_lft forever preferred_lft forever
5.- Configure HAproxy.

mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
nano /etc/haproxy/haproxy.cfg

global
    log www.notesfromchechu.com local0
    log www.notesfromchechu.com local1 notice
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4096
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000

listen smartcloudtest.com xx.xx.xx.xx:80
    mode http
    stats enable
    stats auth user:password   # set a user and password in order to see the stats
    balance roundrobin
    cookie JSESSIONID prefix
    option httpclose
    option forwardfor
    option httpchk HEAD /check.txt HTTP/1.0   # we have to create this file in the root directory of each web server
    server webA xx.xx.xx.xx:80 cookie A check
    server webB xx.xx.xx.xx:80 cookie B check
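Before starting the service, we can validate the configuration file (standard HAProxy check mode):

haproxy -c -f /etc/haproxy/haproxy.cfg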
6.- Set up the start-up

chkconfig haproxy on
chkconfig keepalived on
service haproxy start
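At this point both daemons are running on both servers. A quick manual failover test (assuming the backup node is configured too) is to stop HAProxy on the master and watch the virtual IP move:

service haproxy stop    # on the master: the chk_haproxy check fails and its priority drops
ip addr sh eth0         # on the backup: the virtual IP should now appear here
service haproxy start   # on the master: the virtual IP comes back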
7.- Modify the web servers' conf. Comment the LogFormat line in httpd.conf and add the new one.

nano /usr/local/apache2/conf/httpd.conf
#LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
Comment all the CustomLog lines and add the following lines in the virtual host definition:

SetEnvIf Request_URI "^/check\.txt$" dontlog
CustomLog /var/log/apache2/access.log combined env=!dontlog
Create the file check.txt in the root directory:

touch /usr/local/apache2/htdocs/check.txt
service httpd restart
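To verify the whole chain, we can request the check file through the virtual IP (ip_virtual is whatever we configured in Keepalived); the stats page is reachable on the same address with the user and password set above:

curl -I http://ip_virtual/check.txt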

I hope it will be useful for you.

 

Source:

http://www.howtoforge.com/haproxy_loadbalancer_debian_etch

http://aaronwalrath.wordpress.com/2011/06/28/configure-haproxy-and-keepalived-for-load-balancing-and-reverse-proxy-on-red-hatscientificcentos-linux-56/

 

 

 


How to create a High Availability load balancer on SmartCloud Enterprise (2/2)

Hey guys,

In this second part of the post I will try to show you how to automate the configuration of the image. I mean: every time I have to deploy the service (in this case load balancing), do I need to do the following?

- provision the image from my private catalogue.

- wait for the image to be ready

- configure each service, each conf file…

- restart the services…

The answer is NO!

One of the greatest points of SmartCloud Enterprise is the asset manager. You can modify the firewall at the hypervisor level and automate some actions with startup scripts that can take information at provisioning time.

I will show you where to find the asset manager:

Load_xml_11

We go to "My Dashboard" and search for the image that we want to modify in our private catalogue. When we choose the image, we can click on the "pen" icon in order to edit the configuration of the image:

Load_xml2-1024x532

We can see the file manager for the image conf files:

Load_xml3-1024x529

 

And here is the parameters.xml file. In this file we can modify the firewall at the hypervisor level and take some info from the provisioning process. The default parameters.xml file looks like this:

<parameters xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="platform:/resource/com.ibm.ccl.devcloud.client/schema/parameters.xsd">
  <firewall>
    <rule>
      <source>0.0.0.0/0</source>
      <minport>1</minport>
      <maxport>65535</maxport>
    </rule>
  </firewall>
</parameters>

 

 

 

If we want to modify the firewall at the hypervisor level, we should put these lines:

<firewall>
  <rule>
    <source>0.0.0.0/0</source>
    <minport>80</minport>
    <maxport>80</maxport>
  </rule>
  <rule>
    <source>170.224.196.xx</source>
    <minport>22</minport>
    <maxport>22</maxport>
  </rule>
  <rule>
    <source>170.224.196.xx</source>
    <minport>5666</minport>
    <maxport>5666</maxport>
  </rule>
</firewall>

 

What these lines do is:

The machine with IP 170.224.196.xx can access port 22.

The machine with IP 170.224.196.xx can access port 5666.

Everyone can access port 80.

Port 22 is opened for managing the image, port 5666 for monitoring, and port 80 because it is a web load balancer. We should adjust the iptables firewall in the operating system as well, but that will come in another post.

Now, if we want to get some info at the moment of provisioning the image, we need to add the following lines (in this case I needed to get some info about the load balancing service):

<field name="role" label="ROLE 100(backup) 101(master)" type="number">
  <values>
    <value>101</value>
  </values>
</field>
<field name="ip" label="IP virtual" type="number">
  <values>
    <value></value>
  </values>
</field>
<field name="ip_1" label="IP server 1" type="number">
  <values>
    <value></value>
  </values>
</field>
<field name="ip_2" label="IP server 2" type="number">
  <values>
    <value></value>
  </values>
</field>
<field name="ip_3" label="IP server 3" type="number">
  <values>
    <value></value>
  </values>
</field>

 

With this conf I get the role of the load balancer, the virtual IP, and as many IPs to balance as I want. It looks like this (once the whole process is finished):

Step2_SCE

Ok, now we have the variables that we need to configure the load balancer; the next step is to create the scripts. These scripts have to be saved in the folder "activation_scripts" (maybe you have to create this folder). The one that is mandatory is "cloud-startupX.sh", where the X means the run level of the image. This script looks like this:

#!/bin/sh
### BEGIN INIT INFO
# chkconfig: 3 90 20
# description: cloud-startup3.sh
# processname: cloud-startup3.sh
# Provides: cloud-startup3.sh
# Required-Start:
# Should-Start:
# Required-Stop:
# Should-Stop:
# Default-Start: 3
# Default-Stop:
# Short-Description: Cloud startup
# Description: Extract and set user password
### END INIT INFO

case "$1" in
start)
    echo "== Cloud Starting"
    # Randomize the idcuser password on the first boot only
    if [ ! -e /etc/cloud/idcuser_pw_randomized ]; then
        echo "Randomizing idcuser password"
        echo idcuser:`< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c16` | /usr/sbin/chpasswd
        touch /etc/cloud/idcuser_pw_randomized
    fi
    # Restore the SELinux context of the injected SSH keys
    /sbin/restorecon -R -v /home/idcuser/.ssh
    # Run our configuration script only on the first boot
    if [ ! -e /etc/cloud/parameters.xml.done ]; then
        sh /etc/cloud/register.sh
        cp /etc/cloud/parameters.xml /etc/cloud/parameters.xml.done
    fi
    ;;
stop)
    echo "== Cloud Stopping"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac

 

 

This script has to have this look; the only thing you should change is:

if [ ! -e /etc/cloud/parameters.xml.done ]; then
    sh /etc/cloud/register.sh
    cp /etc/cloud/parameters.xml /etc/cloud/parameters.xml.done
fi ;;

 

These lines are the scripts that I want to execute at the start of the image. To make sure these lines are executed only on the first start of the image, I save parameters.xml under another name and use it later as a condition.

 

The image configuration scripts look like this:

register.sh

#!/bin/bash
NUM=1
IP=1
LETRA=(A B C D E F G H I J K)

# Take the virtual IP from parameters.xml and substitute the IP_VIRTUAL placeholder in the conf files
IP_virtual=$(perl /usr/bin/extract-parameters.pl ip /etc/cloud/parameters.xml)
sed -i "s/IP_VIRTUAL/$IP_virtual/g" /etc/haproxy/haproxy.cfg
sed -i "s/IP_VIRTUAL/$IP_virtual/g" /etc/keepalived/keepalived.conf

# Take the role (keepalived priority: 101 master, 100 backup) and substitute the ROLE placeholder
ROLE=$(perl /usr/bin/extract-parameters.pl role /etc/cloud/parameters.xml)
sed -i "s/ROLE/$ROLE/g" /etc/keepalived/keepalived.conf

sh /etc/cloud/add_server.sh $IP_virtual

# Append one backend server line to haproxy.cfg for each ip_N field defined in parameters.xml
until [ $IP = 0 ]; do
    IP=$(perl /usr/bin/extract-parameters.pl ip_$NUM /etc/cloud/parameters.xml)
    if [ $IP != 0 ]; then
        echo "server server_$NUM $IP:80 cookie ${LETRA[$NUM-1]} check" >> /etc/haproxy/haproxy.cfg
        let NUM=$NUM+1
    fi
done

 

Basically, what this script does is take the variables from parameters.xml and put them into the conf files. I use placeholder words as a reference for sed; this way sed can search the conf file saved in the image and replace the placeholder with the new value.
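The extract-parameters.pl helper is not shown in this post. Just to illustrate what it is expected to return (the field names and XML layout are taken from the parameters.xml above; the rest is my assumption), a rough bash equivalent could be:

#!/bin/bash
# Usage: extract-parameter.sh <field_name> <parameters_file>
# Prints the <value> of the matching <field name="..."> block, or 0 if it is empty/missing.
FIELD=$1
FILE=$2
VALUE=$(sed -n "/<field name=\"$FIELD\"/,/<\/field>/p" "$FILE" | sed -n 's/.*<value>\(.*\)<\/value>.*/\1/p')
if [ -z "$VALUE" ]; then
    echo 0
else
    echo "$VALUE"
fi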

Once we have this, we have to tell the system which scripts to upload. In order to do that, we create the file scripts.txt inside the "activation_scripts" folder:

scripts.txt

cloud-startup3.sh=/etc/init.d/cloud-startup3.sh
register.sh=/etc/cloud/register.sh
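One detail I would double-check on a test instance (this is my assumption; the upload process may already handle it): the startup script has to be executable and registered for run level 3:

chmod +x /etc/init.d/cloud-startup3.sh /etc/cloud/register.sh
chkconfig --add cloud-startup3.sh
chkconfig --list cloud-startup3.sh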

And the process is done; now we just have to click on update and provision our new image.

 

Enjoy!!!

 

 

 


HowTo upload your own images to the SmartCloud Enterprise. UBUNTU Server

Hey guys,

It is very easy to save and run our images on the cloud. IBM gives us the possibility of uploading our own images. How?

Some requirements from the source VM:

- First partition /boot, second partition /

- No LVM volumes

- Ext3 filesystem

- Root partition smaller than 60 GB

1.- The first thing to do is preparing the Virtual Image:

- Create the idcuser and add it to the sudoers file.

adduser idcuser
Create the .ssh folder and the file authorized_keys:

mkdir /home/idcuser/.ssh
touch /home/idcuser/.ssh/authorized_keys
nano /etc/sudoers
and add the following line:

idcuser ALL=(ALL) ALL
- Install SSH

apt-get install ssh
- Modify the sshd config file, commenting all lines starting with "HostKey", and set:

nano /etc/ssh/sshd_config
UsePAM no
PasswordAuthentication no

AllowUsers idcuser

- Change the run-level

nano /etc/init/rc-sysinit.conf
set the run level to 3:

env DEFAULT_RUNLEVEL=3
- The system is designed for Red Hat and SuSE images, so we have to create a script in order to set the network parameters when the image is provisioned. Create the script:

nano /etc/init.d/change_file
#!/bin/bash
# Read the Red Hat style network settings written by the provisioning system
IP=$(sed -n 's/.*IPADDR=\(.*\)/\1/p' /etc/sysconfig/network-scripts/ifcfg-eth0 | cut -c1-64)
NETMASK=$(sed -n 's/.*NETMASK=\(.*\)/\1/p' /etc/sysconfig/network-scripts/ifcfg-eth0 | cut -c1-64)
GATEWAY=$(sed -n 's/.*GATEWAY=\(.*\)/\1/p' /etc/sysconfig/network-scripts/ifcfg-eth0 | cut -c1-64)
HW=$(sed -n 's/.*HWADDR=\(.*\)/\1/p' /etc/sysconfig/network-scripts/ifcfg-eth0 | cut -c1-64)
# Fill the Debian/Ubuntu interfaces template and apply it
cp /etc/init.d/ubuntu_interfaces_0 /etc/init.d/ubuntu_interfaces
sed -i 's/IP/'$IP'/' /etc/init.d/ubuntu_interfaces
sed -i 's/MASK/'$NETMASK'/g' /etc/init.d/ubuntu_interfaces
sed -i 's/GATEWAY/'$GATEWAY'/g' /etc/init.d/ubuntu_interfaces
mv /etc/init.d/ubuntu_interfaces /etc/network/interfaces
/etc/init.d/networking restart

chmod +x /etc/init.d/change_file

Create the template file:

nano /etc/init.d/ubuntu_interfaces_0
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address IP
netmask MASK
gateway GATEWAY
Add the script to the initialization scripts:

apt-get install rcconf
rcconf
Add the script change_file

Create the original network file from Red Hat:

mkdir -p /etc/sysconfig/network-scripts/
nano /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=static
HWADDR=DE:AD:BE:82:11:14
IPADDR=170.224.195.211
NETMASK=255.255.248.0
ONBOOT=yes
GATEWAY=170.224.192.1
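Before saving the image, we can sanity-check the conversion by running the script against the sample values above and looking at the result (do this from the console, not over ssh, since it restarts networking):

sh /etc/init.d/change_file
cat /etc/network/interfaces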

- Check that you can connect to the VM by ssh (service running, firewall…)

2.- The second step is setting up the XML files; these files are the ones that the system uses to run the image.

- The best way to do it is to get these files from an image in the IBM catalogue. I used a Red Hat image. In order to do this you have to create a persistent storage (in this case in Canada) and run the following command (requirements: the image must be saved in our private catalogue and have "allow copy" enabled):

sh ic-copy-to.sh -u user_name -w passphrase -g pass_file -I private_image_id -v volume_id
With this command we will have the Red Hat Enterprise 6 image from Canada copied to a persistent storage in Canada.

Anyway a copy of the original files is here.

- You need to create a folder called "image" in the persistent storage (I usually mount the persistent storage on /mnt):

mkdir /mnt/image
-Now, copy all the files from the Red Hat image that you downloaded before to this folder (except BSS.zip, *.img and *.mf files)

- Create a file named BSS.xml and add these lines:

nano BSS.xml

xmlns:ns2="http://www.ibm.com/cloud/storage/xml">
Ubuntu Server 10.04 64 bits
Ubuntu Server 10.04 64 bits
1003

1.2
Ubuntu Server 10.04 64 bits
COP64,BRZ64,SLV64,GLD64


and zip it.

zip BSS.zip BSS.xml
- We need to rename and edit the *.ovf. The new name will be ubuntu.ovf, and we need to edit it and change the string:

To

You can download the files I used from here.

- Upload the VM file to the machine which has the persistent storage attached. TRICK: to avoid disconnections and re-uploads, you should un-comment this line in the sshd_config file:

nano /etc/ssh/sshd_config
Subsystem sftp /usr/libexec/openssh/sftp-server
If you are a Linux user, use rsync to upload the VM file.

- If your VM file is already in raw (.img) format you don't need to perform this step; if not, convert it:

yum install qemu*
qemu-img convert -O raw /home/idcuser/ubuntu.vmdk /mnt/image/ubuntu.img
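To confirm the conversion worked (standard qemu-img usage), check the resulting format and size:

qemu-img info /mnt/image/ubuntu.img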
- Copy FileValidation.sh to the image folder.

- Run it

sh FileValidation.sh
Choose option 2 to generate the .mf file (it takes a while). Then re-run it again to check that everything is OK, but this time choose the first option (just check).

- Assign the right permissions:

chown -R root:root /mnt/image
chmod -R 755 /mnt/image
I assume there weren't any problems up to this point.

- Delete the machine so that the persistent storage gets detached.

3.- The third step is importing the image. Once the persistent storage is detached, you have to run this command:

sh ic-import-image.sh -u user_name -w passphrase -g pass_file -v id_of_volume_which_contains_the_VM_file -n name_for_the_new_image
And… done. In a few minutes we will have our new image in our private catalogue. If we want to provision it, we just do it as we would with a common image.

Thanks for helping me:

Tomoyuki Niijima

Hans Moen


How to access to SmartCloud Object Storage

Hey guys,

There are four ways to use SmartCloud Object Storage:

- API

- CloudNas

- Web Access (Beta)

- Third-party tools

If we want to use the API, we need to look at the Nirvanix documentation, but we need to change the domain in all requests:

http://services.nirvanix.com

by

http://services.smartcloudobjectstorage.com
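For example, a login call could look something like this (a hedged sketch: the method path and parameter names follow the /ws/ URL pattern shown later in this post and the Nirvanix-style API, so treat them as assumptions and check the Nirvanix documentation for the exact call):

# Hypothetical Nirvanix-style login request against the SmartCloud endpoint
curl "http://services.smartcloudobjectstorage.com/ws/Authentication/Login.ashx?appKey=YOUR_APP_KEY&username=YOUR_USERNAME&password=YOUR_PASSWORD"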

If we want to access through CloudNAS, we can do it via Windows or Linux:

Windows:

1.- Download the Windows CloudNAS application from here: http://services.nirvanix.com/cloudnas/nirvanixApps/win/cloudnas.html
2.- Rename the file to an “.exe” format.
3.- Download the IBM SmartCloud Enterprise object storage SSL Certificate from this link:

http://services.smartcloudobjectstorage.com/ws/Product/GetNewSSLCertificate.ashx

4.- Create this folder “Program Files\Nirvanix\Nirvanix CloudNAS\etc\certs” .
5.- Copy the IBM SmartCloud Enterprise object storage SSL Certificate to the “Program Files\Nirvanix\Nirvanix CloudNAS\etc\certs” folder
6.- Install CloudNAS but DO NOT complete the configuration.
7.- With a text editor, add the configuration setting to the file cloudnas.conf (you have to create it in the folder “Program Files\Nirvanix\Nirvanix CloudNAS\etc\ ” )

CloudNAS.MountPoint=T
Log.File.Dir=C:\Program Files\Nirvanix\Nirvanix CloudNAS\log
Cache.BasePath=C:\Documents and Settings\All Users\Application Data\nirvanix
Nirvanix.WS.ServicesUrl=http://services.smartcloudobjectstorage.com

8.- Change the permissions over the Nirvanix folder (right click over the “Program Files/Nirvanix” folder, and left click over properties). Set full control to the users in that host.

9.- Complete the configuration process for CloudNAS, Start-> Nirvanix-> Nirvanix-> CloudNAS Configuration.
The login details are:

Username: your_username_in_the_child_account
Password: password
Application Name: app_name_of_the_app_pool
Application Key: app_key_of_the_app_pool

10.- Access it from My Computer.

Linux:

1.- Download the Linux CloudNAS application from here:

http://services.nirvanix.com/cloudnas/nirvanixApps/linux/cloudnas.html

2.- Rename the downloaded file.

3.- Download the IBM SmartCloud Enterprise object storage SSL Certificate from this link:

http://services.smartcloudobjectstorage.com/ws/Product/GetNewSSLCertificate.ashx

4.- Create this folder “/opt/cloudnas/etc/certs ” .
sudo mkdir /opt/cloudnas/etc/certs
5.- Copy the IBM SmartCloud Enterprise object storage SSL Certificate to the “/opt/cloudnas/etc/certs ” folder.
cp start-smartcloudobjectstorage.cer /opt/cloudnas/etc/certs
6.- Install CloudNAS:

sudo bash
modprobe fuse
cd cloudnas
sh cloudnas-install
7.- Configure CloudNAS. Edit /opt/cloudnas/etc/cloudnas.conf:

nano /opt/cloudnas/etc/cloudnas.conf

and add the following:

Log.Syslog.Enable=true
Nirvanix.WS.ServicesUrl=http://services.smartcloudobjectstorage.com
Run:

sh cloudnas-config

8.- In order to restart the service:

/etc/init.d/cloudnasd restart

And if you run :

df -h

You will see that there is a new device mounted on the "nirvanix" directory.

Web access (Beta)

If you want to access via web, you can do it at this URL:

https://www.nirvanixtest.com/objectstorageclient

Publish the files via URL

In order to publish a link with the file, there is a URL:

http://www.nirvanixtest.com/quicksend/


Howto tune Apache2 and PHP

Well… I needed to improve the performance of my Apache server on Amazon EC2 for our PHP app, so this is what I did. I'm sure there are a lot of ways to do better and more than what I post here, but it works for me hehe:

First of all, we need to install eAccelerator; this is for PHP:

apt-get install php5-dev
wget http://bart.eaccelerator.net/source/0.9.6.1/eaccelerator-0.9.6.1.tar.bz2
tar -xvjf eaccelerator-0.9.6.1.tar.bz2
cd eaccelerator-0.9.6.1
phpize
./configure --enable-eaccelerator=shared
make

make install
Now we edit /etc/php5/apache2/php.ini and add the following at the end of the file:

zend_extension = "/usr/lib/php5/20090626+lfs/eaccelerator.so"
eaccelerator.shm_size = "64"
eaccelerator.cache_dir = "/var/cache/eaccelerator"
eaccelerator.enable = "1"
eaccelerator.optimizer = "1"
eaccelerator.check_mtime = "1"
eaccelerator.debug = "0"
eaccelerator.filter = ""
eaccelerator.shm_max = "0"
eaccelerator.shm_ttl = "0"
eaccelerator.shm_prune_period = "0"
eaccelerator.shm_only = "0"
eaccelerator.compress = "1"
eaccelerator.compress_level = "9"
eaccelerator.allowed_admin_path = "/var/www/eaccelerator"

and set memory_limit = 32M (this is in my case; I hope a PHP process doesn't weigh more than this).
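Once php5-cli is installed (it is added a couple of steps below), a quick way to confirm that eAccelerator loads with this ini is to point the CLI at the Apache php.ini:

php -c /etc/php5/apache2/php.ini -m | grep -i eaccelerator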

In this case I'm going to use instances in EC2 that have more than one CPU, so I'm going to use mpm-worker in Apache2. To make this work with PHP5, let's do this:

apt-get install apache2-mpm-worker libapache2-mod-fcgid
a2enmod fcgid
aptitude install php5-cgi php5-cli
And add the following lines in the directory section of the app's virtual host conf file:

AddHandler fcgid-script .php
FCGIWrapper /usr/lib/cgi-bin/php5 .php
Options +ExecCGI
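To make sure the fcgid module is really enabled before relying on it:

apache2ctl -M | grep fcgid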

Ok, we are finishing. We just need to adjust some parameters in mpm-worker. Some sites say you need to know how much memory is free and how much memory a PHP process weighs. I really don't care, because I'm going to use autoscaling, so when the server gets to around 80% load I bring up another one. The parameters that I tested are the following:

ServerLimit 512
Timeout 20
KeepAlive On
MaxKeepAliveRequests 1000
KeepAliveTimeout 2
StartServers 10
MinSpareThreads 50
MaxSpareThreads 150
ThreadLimit 256
ThreadsPerChild 64
MaxClients 512
MaxRequestsPerChild 0
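After changing the MPM parameters, validate the configuration and restart Apache:

apache2ctl configtest
service apache2 restart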

References:

http://developer.mindtouch.com/en/kb/Improve_PHP_performance_with_eAccelerator_on_Ubuntu_8.04_(Debian)

http://ubuntuforums.org/showthread.php?t=1038416

http://www.bootstrappingindependence.com/technology/how-to-improve-website-performance-with-drupal-php-mysql-and-apache/


Howto bundle an EC2 Amazon image (not EBS)

Ok, I'm sure you have built an image (not EBS-backed) on Amazon EC2 and afterwards wanted to save it… how?

On the AMI in Amazon EC2, let's do the following.

1- Let's install the Amazon EC2 API tools:

perl -pi -e 's%(universe)$%$1 multiverse%' /etc/apt/sources.list

apt-get update

apt-get install ec2-api-tools ec2-ami-tools

2- We need to set some environment variables. These variables point to the directory where the certificates are stored; we need the private key you downloaded to access your images and the X.509 certificate.

export EC2_PRIVATE_KEY=/mnt/pk-K5AHxxxxxxxxxxxxxxxxxx.pem

export EC2_CERT=/mnt/cert-K5Axxxxxxxxxxxxxxxxxx.pem

export EC2_ACCNO=9xxx-6xxx-7xxx

export ACCESS_KEY=AKIAJxxxxxxxxxx

export SECRET_KEY=2h/xxxxxxxxxxxxxNKIxxj/xxxx

3- Bundling the AMI: this step will create an image, break it into parts of 10 MB each, and encrypt them (note: if it is an AMD64 system, replace i386 with x86_64).

ec2-bundle-vol -d /mnt -k $EC2_PRIVATE_KEY -c $EC2_CERT -u $EC2_ACCNO -r i386

4- Create a bucket on S3: our newly created AMI will be saved in this bucket.

5- Upload AMI to S3:

ec2-upload-bundle -b YOUR-S3-BUCKET -m /mnt/image.manifest.xml -a $ACCESS_KEY -s $SECRET_KEY

6- Register the AMI: your image must be registered with Amazon EC2 so Amazon can locate it and run instances based on it. In this process your newly created AMI will get a unique AMI ID:

ec2-register -K $EC2_PRIVATE_KEY -C $EC2_CERT YOUR-S3-BUCKET/image.manifest.xml --region eu-west-1
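To check that the new AMI shows up with its ID (standard EC2 API tools usage):

ec2-describe-images -o self -K $EC2_PRIVATE_KEY -C $EC2_CERT --region eu-west-1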

References:

http://patodirahul.blogspot.com/2011/03/create-ami.html

http://alestic.com/2009/06/ec2-ami-bundle

http://www.hennepintech.edu/techservices/pages/590


How to fix problems in a replicated data servers infrastructure

When you get an error like this:

[ERROR] Got fatal error 1236: 'Could not find first log file name in binary log index file' from master when reading data from binary log
[Note] Slave I/O thread exiting, read up to log 'mysql_binary_log.000004', position 465
The solution is simple. Let's see:

In server1:

FLUSH TABLES WITH READ LOCK;

SHOW MASTER STATUS; (remember the result)

In another console:

mysqldump -u root -p --extended-insert --all-databases > /tmp/backup.sql

and again in the MySQL prompt:

UNLOCK TABLES;

On server2 we copy the backup, and:

STOP SLAVE;

mysql -u root -p -h localhost < backup.sql

CHANGE MASTER TO MASTER_HOST='ip_server1', MASTER_USER='replica_user', MASTER_PASSWORD='password_replica_user', MASTER_LOG_FILE='log_file_shown_in_the_status_of_server1', MASTER_LOG_POS=position_shown_in_the_status_of_server1;

START SLAVE;
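To confirm the replication thread is back up (standard MySQL check, run from the shell):

mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"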

And the problem is solved. You need to do the same but the other way around (making server1 the slave of server2).

References:

http://forums.mysql.com/read.php?26,223923,224042#msg-224042


Howto make a first approach to database autoscaling in Amazon EC2

What I want to try is to have an infrastructure of two data servers behind a load balancer, one master and one slave. They will replicate data to each other, so at every moment the two servers will have the same data. I have to warn that I have no idea about data servers hehe; I'm doing this because I need to do it.

We create the database and the users in MySQL:

CREATE DATABASE trial;
CREATE USER 'trial_user'@'%' IDENTIFIED BY 'PASS';
GRANT ALL PRIVILEGES ON trial.* TO 'trial_user'@'%' WITH GRANT OPTION;

That is the database that we want to replicate. Now we need to create the replication user; we do the following on both servers:

GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'%' IDENTIFIED BY 'slave_password';

FLUSH PRIVILEGES; quit;

Now we set up master-master replication in /etc/mysql/my.cnf. The crucial configuration options for master-master replication are auto_increment_increment and auto_increment_offset:

auto_increment_increment controls the increment between successive AUTO_INCREMENT values.
auto_increment_offset determines the starting point for AUTO_INCREMENT column values.
Let's assume we have N MySQL nodes (N=2 in this example); then auto_increment_increment has the value N on all nodes, and each node must have a different value for auto_increment_offset (1, 2, …, N). With N=2, server1 (offset 1) generates ids 1, 3, 5, … and server2 (offset 2) generates 2, 4, 6, …, so inserts made on both masters never produce colliding keys.

Now let’s configure our two MySQL nodes:

nano /etc/mysql/my.cnf

[...]
[mysqld]
server-id = 1
replicate-same-server-id = 0
auto-increment-increment = 2
auto-increment-offset = 1

master-host = ip_server2
master-user = slave_user
master-password = slave_password
master-connect-retry = 60
replicate-do-db = trial

log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db = trial

relay-log = /var/lib/mysql/slave-relay.log
relay-log-index = /var/lib/mysql/slave-relay-log.index

expire_logs_days = 10
max_binlog_size = 500M
[...]

and restart the database:

/etc/init.d/mysql restart
We do the same on the second server, just changing these parameters:

server-id = 2
replicate-same-server-id = 0
auto-increment-increment = 2
auto-increment-offset = 2
master-host = ip_server1
and restart the database.

Now in server1, in the MySQL prompt:

USE trial;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
and something like this should appear:

+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000009 | 98       | trial        |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

Now don't leave the MySQL shell, because if you leave it the database lock will be removed, and that is not what we want right now, since we must create a database dump first. While the MySQL shell is still open, we open a second command line window where we create the SQL dump snapshot.sql and transfer it to server2 (using scp):

cd /tmp
mysqldump -u root -pyourrootsqlpassword --opt trial > snapshot.sql
scp snapshot.sql root@ip_server2:/tmp
We go back to the MySQL prompt and:

UNLOCK TABLES;
quit;
On server2, we can now import the SQL dump snapshot.sql like this:

/usr/bin/mysqladmin --user=root --password=yourrootsqlpassword stop-slave
cd /tmp
mysql -u root -pyourrootsqlpassword trial < snapshot.sql
And here is the key: on server2 we need to make server2 a slave of server1:

FLUSH TABLES WITH READ LOCK;
UNLOCK TABLES;
CHANGE MASTER TO MASTER_HOST='ip_server1', MASTER_USER='slave_user', MASTER_PASSWORD='slave_password', MASTER_LOG_FILE='mysql-bin.000009', MASTER_LOG_POS=98;
START SLAVE;
We need to repeat the process in reverse, to make server1 a slave of server2.

References:

http://www.howtoforge.com/mysql5_master_master_replication_debian_etch
