HPC versus HDFS: Scientific versus Social

There have been rumblings from the HPC community indicating a general suspicion of, and disdain for, Big Data technology, which would lead one to believe that whatever Google, Facebook and Twitter do with their supercomputers is not important enough to be taken seriously—that social supercomputing is simply not worthy.  A little of this emotion seems to […]

Quiz night

Here’s a little puzzle that came up on OTN recently.  (No prizes for following the URL to find the answer) (Actually, no prizes anyway). There’s more in the original code sample than was really needed, so although I’ve done a basic cut and paste from the original I’ve also eliminated a few lines of the text:


execute dbms_random.seed(0)

create table t
as
select rownum as id,
       100+round(ln(rownum/3.25+2)) as val2,
       dbms_random.string('p',250) as pad
from dual
connect by level <= 1000
order by dbms_random.value;

begin
  dbms_stats.gather_table_stats(ownname          => user,
                                tabname          => 'T',
                                method_opt       => 'for all columns size 254'
  );
end;
/

column endpoint_value format 9999
column endpoint_number format 999999
column frequency format 999999

select endpoint_value, endpoint_number,
       endpoint_number - lag(endpoint_number,1,0)
                  OVER (ORDER BY endpoint_number) AS frequency
from user_tab_histograms
where table_name = 'T'
and column_name = 'VAL2'
order by endpoint_number
;

alter session set optimizer_mode = first_rows_100;

explain plan set statement_id = '101' for select * from t where val2 = 101;
explain plan set statement_id = '102' for select * from t where val2 = 102;
explain plan set statement_id = '103' for select * from t where val2 = 103;
explain plan set statement_id = '104' for select * from t where val2 = 104;
explain plan set statement_id = '105' for select * from t where val2 = 105;
explain plan set statement_id = '106' for select * from t where val2 = 106;

select statement_id, cardinality from plan_table where id = 0;

The purpose of the method_opt in the gather_table_stats() call is to ensure we get a frequency histogram on val2; and the query against the user_tab_histograms view should give the following result:


ENDPOINT_VALUE ENDPOINT_NUMBER FREQUENCY
-------------- --------------- ---------
           101               8         8
           102              33        25
           103             101        68
           104             286       185
           105             788       502
           106            1000       212
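The FREQUENCY column is simply each cumulative ENDPOINT_NUMBER minus the previous one, which is what the lag(endpoint_number,1,0) call computes. As a quick sanity check, the same arithmetic can be reproduced outside the database (a minimal sketch, not Oracle-specific):

```shell
#!/bin/bash
# Recompute FREQUENCY from the cumulative ENDPOINT_NUMBER values above,
# mirroring lag(endpoint_number,1,0) in the histogram query.
prev=0
freqs=""
for pair in 101:8 102:33 103:101 104:286 105:788 106:1000; do
  endpoint=${pair##*:}                  # cumulative ENDPOINT_NUMBER
  freqs="$freqs $((endpoint - prev))"   # row count for this ENDPOINT_VALUE
  prev=$endpoint
done
echo "frequencies:$freqs"               # 8 25 68 185 502 212
```

The values match the FREQUENCY column exactly, which is why the first three optimizer estimates below look correct.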

Given the perfect frequency histogram, the question then arises why the optimizer seems to calculate incorrect cardinalities for some of the queries; the output from the last query is as follows:


STATEMENT_ID                   CARDINALITY
------------------------------ -----------
101                                      8
102                                     25
103                                     68
104                                    100           -- expected prediction 185
105                                    100           -- expected prediction 502
106                                    100           -- expected prediction 212

I’ve disabled comments so that you can read the answer at OTN if you want to – but see if you can figure out the reason before reading it. (This reproduces on 11g and 12c – and probably on earlier versions all the way back to 9i).

I haven’t done anything extremely cunning with hidden parameters, materialized views, query rewrite, hidden function calls, virtual columns or any other very dirty tricks, by the way.


Announcement: Singapore Oracle Sessions

When I knew that the ACE Director, Bjoern Rost of Portrix Systems was coming to Singapore on his way to begin the OTN APAC tour, I suggested he stay at mine for a few days and sample all that Singapore has to offer.

Then a thought occurred to me. While he was here, why not set up an informal Oracle users meetup, much like the various ones in cities around the world like Sydney, Birmingham and London (to name but three I'm aware of)? Morten Egan, my new colleague and Oak Table luminary, had already suggested to me months ago that we should get something going in Singapore, so why not start now?

Well, in a matter of a few days, we've put together an agenda and a room; we will be having pizza, beer and other drinks, and three hopefully useful sessions from experienced speakers.

Here is the agenda (SingaporeOracleSessions.pdf) and a map (SOSMap.pdf) to help you get to the venue which is very handily placed near Bugis MRT. All that's required to register is to email me at dougburns at Yahoo. There are currently 21 people registered but the room holds (believe it or not) 42, so spread the word!

Hopefully, it's just the beginning ....

12.1.0.2 Introduction to Zone Maps Part II (Changes)

In Part I, I discussed how Zone Maps are new index-like structures, similar to Exadata Storage Indexes, that enable the “pruning” of disk blocks during accesses of the table by storing the min and max values of selected columns for each “zone” of a table, a zone being a range of contiguous (8M) blocks. I […]

Percona Live London 2014

Percona Live London takes place next week from November 3-4, where Pythian is a platinum sponsor—visit us at our booth during the day on Tuesday, or at the reception in the evening. Not only are we attending, but we’re taking part in exciting speaking engagements, so be sure to check out our sessions and hands-on labs. Find the details below.

 

MySQL Break/Fix Lab by Miklos Szel, Alkin Tezuysal, and Nikolaos Vyzas

Monday November 3 — 9:00AM-12:00PM
Cromwell 3 & 4

Miklos, Alkin, and Nikolaos will be presenting a hands-on lab demonstrating how to evaluate operational errors and issues in MySQL 5.6, and how to recover from them. They will be covering instance crashes and hangs, troubleshooting and recovery, and significant performance issues. Find out more about the speakers below.

About Miklos: Miklos Szel is a Senior Engineer at Pythian, based in Budapest. With more than 10 years’ experience in system and network administration, he has also worked for Walt Disney International as its main MySQL DBA. Miklos specializes in MySQL-based high availability solutions, performance tuning, and monitoring, and has significant experience working with large-scale websites.

About Alkin: Alkin Tezuysal has extensive experience in enterprise relational databases, working in various sectors for large corporations. With more than 19 years of industry experience, he has been able to work on large projects from the ground up to production. In recent years, he has been focusing on eCommerce, SaaS, and MySQL technologies.

About Nikolaos: Nik Vyzas is a Lead Database Consultant at Pythian, and an avid open source engineer. He began his career as a software developer in South Africa, and moved into technology consulting firms for various European and US-based companies. He specializes in MySQL, Galera, Redis, MemcacheD, and MongoDB on many OS platforms.

 

Setting up Multi-Source Replication in MariaDB 10 by Derek Downey

Monday November 3 — 2:00-5:00PM
Cromwell 3 & 4

For a long time, replication in MySQL was limited to only a single master. When MariaDB 10.0 became generally available, the ability to allow multiple masters became a reality. This has opened up the door to previously impossible architectures. In this hands-on tutorial, Derek will discuss some of the features in MariaDB 10.0, demonstrate establishing a four-node environment running on participants’ computers using Vagrant and VirtualBox, and even discuss some limitations associated with 10.0. Check out Derek’s blog post for more detailed info about his session.

About Derek: Derek began his career as a PHP application developer, working out of Knoxville, Tennessee. Now a Principal Consultant in Pythian’s MySQL practice, Derek is sought after for his deep knowledge of Galera and diagnosing replication issues.

 

Understanding Performance Through Measurement, Benchmarking, and Profiling by René Cannaò

Monday November 3 — 2:00-5:00PM
Orchard 2

It is essential to understand how your system performs at different workloads to measure the impacts of changes and growth, and to understand how those impacts will manifest. Measuring the performance of current workloads is not trivial, and the creation of a staging environment where different workloads need to be tested has its own set of challenges. Performing capacity planning, exploring concerns about scalability and response time, and evaluating new hardware or software configurations are all operations requiring measurement and analysis in an environment appropriate to your production setup. To find bottlenecks, performance needs to be measured both at the OS layer and at the MySQL layer: an analysis of OS and MySQL benchmarking and monitoring/measuring tools will be presented. Various benchmark strategies will be demonstrated for real-life scenarios, as well as tips on how to avoid common mistakes.

About René: René has 10 years of working experience as a System, Network and Database Administrator, mainly on Linux/Unix platforms. In recent years, he has been focused mainly on MySQL, previously working as Senior MySQL Support Engineer at Sun/Oracle and now as Senior Operational DBA at Pythian (formerly Blackbird, acquired by Pythian).

 


Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s MySQL expertise.

Metric Thresholds and the Power to Adapt

Metric thresholds have come a long way since I started working with OEM 10g.  I remember how frustrating it could be when an ETL load forced the metric values for IO or CPU on a database to be set for the night-time batch workload, when a much lower value would have been preferable during business hours.  Having to explain to the business why a notification wasn’t sent during the day, because the threshold was set for the resource usage of night-time batch processing, often went down badly.

With EM12c, release 4, we now have Time-based Static thresholds and Adaptive thresholds.  Both are incredibly valuable for ensuring the administrator is aware of issues before they become a problem, and for making sure environments with skewed workloads don’t leave them unaware.

Both of these new features are available once you are logged into a target. From the left-side menu, choose <Target Type Drop Down Below Target Name>, Monitoring, Metric and Collection Settings.  Under the Metrics tab you will find a drop-down that can be changed from the default of Metrics with Thresholds to Time-based Static and Adaptive Thresholds, which will let you view any current setup for either of these advanced threshold management features.

adv_thresh_page2

To access the configuration, look below on the page for the Advanced Threshold Management link-

adv_thresh_page3

Time-Based Static Thresholds

The concept behind Time-based Static thresholds is that you have very specific workloads in a 24hr period and you wish to set thresholds based on the resource cycle.  This will require the administrator to be very familiar with the workload to set this correctly.  I understand this model very well, as most places I’ve been the DBA for, I was known for memorizing EXACTLY the standard flow of resource usage for any given database.

In the Time-based Static Threshold tab from the Metrics tab, we can configure, per target (host, database, cluster), the thresholds by value and time that make sense for the target by clicking on Register Metrics.

This will take you to a Metric Selector page that will help you set up the time-based static thresholds for the target, and remember, this is target specific.  You can choose to set up as many metrics as you like for a specific target, or just one or two.  The search option allows for easy access to the metrics.

adaptive12

Choose which metrics you wish to set the time-based static thresholds for and click OK.

You can then set the values for each metric that was chosen for weekday or weekend, etc.

adaptive13

You will be warned that your metric thresholds will not be set until you hit the Save button.  Note: You won’t be able to click on it until you close this warning, as the Save button is BEHIND the pop-up warning.

If the default threshold changes for weekday day/night and weekend day/night are not adequate to satisfy the demands of the system workload, you can edit and change these to be more definitive-

adv_thrhld5

Once you’ve chosen the frequency change, you can then set up the threshold values for the more comprehensive plan and save the changes.  That’s all there is to it, but I do recommend tweaking as necessary if any “white noise” pages result from the static settings.

Removing Time-based Static Thresholds

To remove a time-based static threshold for any metric(s), click on the Select box for each metric whose thresholds you wish to remove and click the Remove button.  You will be asked to confirm, and the metric(s)’ time-based static threshold settings will revert to the default values, or to values set in a default monitoring template for the target type.

Adaptive Thresholds

Unlike Time-based Static Thresholds, which are based on settings configured manually, Adaptive Thresholds source their threshold settings from a “collected” baseline.  This is more advanced than statically set thresholds, as it takes the history of the workload collected in the baseline into consideration when calculating the thresholds.  The most important thing to remember is to use a baseline that includes a clear example of a standard workload of the system in the snapshot.

There are two types of baselines: static and moving.  A static baseline is for a given snapshot of time and does not change.  A moving baseline is recollected on a regular interval and can cover anywhere from 7 to 31 days.

The reason to use a moving baseline over a static one is that a moving baseline will incorporate changes to the workload over time, so the thresholds grow with the system.  The drawback?  If there is a problem that happens on a regular interval, you may not catch it, whereas a static baseline, once verified, would not be affected by that kind of change.
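To make the distinction concrete, here is a small sketch of the moving-baseline idea. This is purely illustrative: EM12c computes its adaptive thresholds internally from the baseline, and the max-plus-20%-headroom rule, window size, and sample values below are my own simplifications, not OEM's actual calculation.

```shell
#!/bin/bash
# Illustrative sketch only: derive a warning threshold from a "moving
# baseline" holding the last 7 daily peak-CPU samples, as max + 20% headroom.
# (EM12c's real adaptive-threshold calculation is internal to OEM.)
samples=(55 60 58 62 57 61 59 64)   # daily peak CPU %, oldest first (assumed data)
window=7
start=$(( ${#samples[@]} - window ))
max=0
for s in "${samples[@]:$start:$window}"; do   # slide the window to the newest samples
  (( s > max )) && max=$s
done
threshold=$(( max + max / 5 ))
echo "moving-baseline threshold: $threshold"
```

With a static baseline, the sample set would be fixed at collection time and the derived threshold would stay put until you recollected; with a moving baseline, the window slides each interval, so the threshold follows workload growth, for better and for worse.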

After a baseline of performance metric data has been collected from a target, you can then access the Adaptive Thresholds configuration tab via the Advanced Threshold Management page.

You have the option from the Advanced Threshold Management page to set up the default settings for the baseline type, the threshold change frequency, and how much accumulated baseline data the adaptive threshold values should be based on.

adaptive11

Once you choose the adaptive settings you would like to make active, click on the Save button to keep the configuration.

Now let’s add the metrics we want to configure adaptive thresholds for by clicking on Register Metrics-

adaptive14

You will be taken to a window similar to the one you saw for the Time-based Static Thresholds.  Drill down in the list and choose the metrics that could benefit from an adaptive threshold setting, and once you are done choosing all the metrics that you want from the list, click on OK.

Note:  Once you hit OK, there are no other settings to be configured.  Cloud Control will then complete the configuration, so ensure you have chosen the correct metrics to register for the target.

adaptive15

Advanced Reporting on Adaptive Thresholds

For any adaptive threshold that you have registered, you can click on the Select box (on the right side of the metric list) and view an analysis of the threshold data to see how the adaptive thresholds are supporting the metric.

adaptive16

You can also test out different values and preview how they will support the metric and decide if you want to move away from an adaptive threshold and to a static one.

You can also click on Test All, which will look at previous data to show how, in theory, the adaptive thresholds would perform in the future, based on how the data in the baseline has been analyzed for the frequency window.

For my metric, I didn’t have enough time behind my baseline to give much in the way of a response, but the screenshot gives you an idea of what you will be looking at-

adaptive16

Removing Adaptive Thresholds

If there is a metric that you no longer wish to have an adaptive threshold on, simply put a check mark in the metric’s Select box and then click on Deregister-

adaptive17

You will be asked if you want to continue; click Yes and the adaptive threshold will be removed from the target for the metric(s) checked.

Advanced threshold management offers the administrator a few more ways to gain definitive control over monitoring of targets via EM12c.  I haven’t found an environment yet that didn’t have at least one database or host that could benefit from these valuable features.





Copyright © DBA Kevlar [Metric Thresholds and the Power to Adapt], All Right Reserved. 2014.

A World View

I’ve mentioned this before, but I thought I would show something visual…

The majority of my readers come from the USA and India. Since they are in different time zones, it spreads the load throughout the day. When I wake up, India are dominant.

MorningTraffic

In the afternoon the USA come online, by which time Russia have given up, but there is still a hardcore of Indians going for it! :)

AfternoonTraffic

I haven’t posted an evening shot as it’s the same as the afternoon one. Don’t you folks in India ever sleep?

I’m sure this is exactly the same with all other technology-related websites, but it does make me pause for thought occasionally. Most aspects of our lives are so localised, like traffic on the journey to work or family issues. It’s interesting to stop and look occasionally at the sort of reach this internet thing has given us. It may be a little rash, but I predict this interwebs thing might just catch on!

Cheers

Tim…


A World View was first posted on October 29, 2014 at 8:53 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Deploying a Private Cloud at Home — Part 7

Welcome to part 7, the final blog post in my series, Deploying Private Cloud at Home, where I will be sharing the scripts to configure the controller and compute nodes. In my previous post, part six, I demonstrated how to configure the controller and compute nodes.

Kindly update the script with the passwords you want and then execute it. I am assuming here that this is a fresh installation and no service is configured on the nodes.

The script below configures the controller node and has two parts:

  1. Pre compute node configuration
  2. Post compute node configuration

Running “config-controller.sh -pre” performs the pre compute node configuration, preparing the controller node and OpenStack services. “config-controller.sh -post” performs the post compute node configuration of the controller node, as these services are dependent on compute node services.

config-controller.sh

#!/bin/bash
#Configure controller script v 4.4
#############################################
# Rohan Bhagat             ##################
# Email:Me at rohanbhagat.com ###############
#############################################
#set variables used in the configuration
#Admin user password
ADMIN_PASS=YOUR_PASSWORD
#Demo user password
DEMO_PASS=YOUR_PASSWORD
#Keystone database password
KEYSTONE_DBPASS=YOUR_PASSWORD
#Admin user Email
ADMIN_EMAIL=YOUR_EMAIL
#Demo user Email
DEMO_EMAIL=YOUR_EMAIL
#Glance db user pass
GLANCE_DBPASS=YOUR_PASSWORD
#Glance user pass
GLANCE_PASS=YOUR_PASSWORD
#Glance user email
GLANCE_EMAIL=YOUR_EMAIL
#Nova db user pass
NOVA_DBPASS=YOUR_PASSWORD
#Nova user pass
NOVA_PASS=YOUR_PASSWORD
#Nova user Email
NOVA_EMAIL=YOUR_EMAIL
#Neutron db user pass
NEUTRON_DBPASS=YOUR_PASSWORD
#Neutron user pass
NEUTRON_PASS=YOUR_PASSWORD
#Neutron user email
NEUTRON_EMAIL=YOUR_EMAIL
#Metadata proxy pass
METADATA_SECRET=YOUR_PASSWORD
#IP to be declared for controller
MY_IP=192.168.1.140
#FQDN for controller hostname or IP
CONTROLLER=controller
#MYSQL root user pass
MYSQL_PASS=YOUR_PASSWORD
#Heat db user pass
HEAT_DBPASS=YOUR_PASSWORD
#Heat user pass
HEAT_PASS=YOUR_PASSWORD
#Heat user email
HEAT_EMAIL=YOUR_EMAIL
#IP range for VM Instances
RANGE=192.168.1.16/28
#Secure MySQL
MYSQL_ROOT_PASSWORD=YOUR_PASSWORD
#Current MySQL root password leave blank if you have not configured MySQL
CURNT_PASS=""



# Get versions:
SCRIPT_VER="v4.4"
if [ "$1" = "--version" -o "$1" = "-v" ]; then
	echo "`basename $0` script version $SCRIPT_VER"
  exit 0
elif [ "$1" = "" ] || [ "$1" = "--help" ]; then
  echo "Configures controller node with pre compute and post compute deployment settings"
  echo "Usage:"
  echo "       `basename $0` [--help | --version | -pre | -post]"
  exit 0

elif [ "$1" = "-pre" ]; then

echo "============================================="
echo "This installation script is based on OpenStack icehouse guide"
echo "Found http://docs.openstack.org/icehouse/install-guide/install/yum/content/index.html"
echo "============================================="

echo "============================================="
echo "controller configuration started"
echo "============================================="

echo "Installing MySQL packages"
yum install -y mysql mysql-server MySQL-python
echo "Installing RDO OpenStack repo"
yum install -y http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm
echo "Installing openstack keystone, qpid Identity Service, and required packages for controller"
yum install -y yum-plugin-priorities openstack-utils mysql mysql-server MySQL-python qpid-cpp-server openstack-keystone python-keystoneclient expect


echo "Modification of qpid config file"
perl -pi -e 's,auth=yes,auth=no,' /etc/qpidd.conf
chkconfig qpidd on
service qpidd start


echo "Configuring mysql database server"
cat > /etc/my.cnf <
(crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone

echo "Define users, tenants, and roles"
export OS_SERVICE_TOKEN=$ADMIN_TOKEN
export OS_SERVICE_ENDPOINT=http://$CONTROLLER:35357/v2.0

echo "keystone admin creation"
keystone user-create --name=admin --pass=$ADMIN_PASS --email=$ADMIN_EMAIL
keystone role-create --name=admin
keystone tenant-create --name=admin --description="Admin Tenant"
keystone user-role-add --user=admin --tenant=admin --role=admin
keystone user-role-add --user=admin --role=_member_ --tenant=admin


echo "keystone demo creation"
keystone user-create --name=demo --pass=$DEMO_PASS --email=$DEMO_EMAIL
keystone tenant-create --name=demo --description="Demo Tenant"
keystone user-role-add --user=demo --role=_member_ --tenant=demo
keystone tenant-create --name=service --description="Service Tenant"

echo "Create a service entry for the Identity Service"
keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') \
--publicurl=http://$CONTROLLER:5000/v2.0 \
--internalurl=http://$CONTROLLER:5000/v2.0 \
--adminurl=http://$CONTROLLER:35357/v2.0

echo "Verify Identity service installation"
unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
echo "Request an authentication token by using the admin user and the password you chose for that user"
keystone --os-username=admin --os-password=$ADMIN_PASS \
  --os-auth-url=http://$CONTROLLER:35357/v2.0 token-get
keystone --os-username=admin --os-password=$ADMIN_PASS \
  --os-tenant-name=admin --os-auth-url=http://$CONTROLLER:35357/v2.0 \
  token-get

cat > /root/admin-openrc.sh <<EOF
export OS_USERNAME=admin
export OS_PASSWORD=$ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
EOF

source /root/admin-openrc.sh
echo "keystone token-get"
keystone token-get
echo "keystone user-list"
keystone user-list
echo "keystone user-role-list --user admin --tenant admin"
keystone user-role-list --user admin --tenant admin

echo "Install the Image Service"
yum install -y openstack-glance python-glanceclient
openstack-config --set /etc/glance/glance-api.conf database connection mysql://glance:$GLANCE_DBPASS@$CONTROLLER/glance
openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glance:$GLANCE_DBPASS@$CONTROLLER/glance

echo "configure glance database"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "CREATE DATABASE glance;"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$GLANCE_DBPASS';"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$GLANCE_DBPASS';"

echo "Create the database tables for the Image Service"
su -s /bin/sh -c "glance-manage db_sync" glance

echo "creating glance user"
keystone user-create --name=glance --pass=$GLANCE_PASS --email=$GLANCE_EMAIL
keystone user-role-add --user=glance --tenant=service --role=admin


echo "glance configuration"
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password $GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password $GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone


echo "Register the Image Service with the Identity service"
keystone service-create --name=glance --type=image --description="OpenStack Image Service"
keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ image / {print $2}') \
  --publicurl=http://$CONTROLLER:9292 \
  --internalurl=http://$CONTROLLER:9292 \
  --adminurl=http://$CONTROLLER:9292

echo "Start the glance-api and glance-registry services"
service openstack-glance-api start
service openstack-glance-registry start
chkconfig openstack-glance-api on
chkconfig openstack-glance-registry on

echo "Testing image service"
echo "Download the cloud image"
wget -q http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img -O /root/cirros-0.3.2-x86_64-disk.img
echo "Upload the image to the Image Service"
source /root/admin-openrc.sh
glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 \
--container-format bare --is-public True \
--progress < /root/cirros-0.3.2-x86_64-disk.img

echo "Install Compute controller services"
yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
source /root/admin-openrc.sh

echo "Configure compute database"
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:$NOVA_DBPASS@$CONTROLLER/nova

echo "configuration keys to configure Compute to use the Qpid message broker"
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname $CONTROLLER

source /root/admin-openrc.sh

echo "Set the my_ip, vncserver_listen, and vncserver_proxyclient_address configuration options"
echo "to the management interface IP address of the $CONTROLLER node"
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address $MY_IP

echo "Create a nova database user"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "CREATE DATABASE nova;"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$NOVA_DBPASS';"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$NOVA_DBPASS';"

echo "Create the Compute service tables"
su -s /bin/sh -c "nova-manage db sync" nova

echo "Create a nova user that Compute uses to authenticate with the Identity Service"
keystone user-create --name=nova --pass=$NOVA_PASS --email=$NOVA_EMAIL
keystone user-role-add --user=nova --tenant=service --role=admin

echo "Configure Compute to use these credentials with the Identity Service running on the controller"
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password $NOVA_PASS

echo "Register Compute with the Identity Service"
keystone service-create --name=nova --type=compute --description="OpenStack Compute"
keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ compute / {print $2}') \
  --publicurl=http://$CONTROLLER:8774/v2/%\(tenant_id\)s \
  --internalurl=http://$CONTROLLER:8774/v2/%\(tenant_id\)s \
  --adminurl=http://$CONTROLLER:8774/v2/%\(tenant_id\)s

echo "Start Compute services and configure them to start when the system boots"
service openstack-nova-api start
service openstack-nova-cert start
service openstack-nova-consoleauth start
service openstack-nova-scheduler start
service openstack-nova-conductor start
service openstack-nova-novncproxy start
chkconfig openstack-nova-api on
chkconfig openstack-nova-cert on
chkconfig openstack-nova-consoleauth on
chkconfig openstack-nova-scheduler on
chkconfig openstack-nova-conductor on
chkconfig openstack-nova-novncproxy on  

echo "To verify your configuration, list available images"
echo "nova image-list"
sleep 5
source /root/admin-openrc.sh
nova image-list

fi


if [ "$1" = "-post" ]; then
#set variables used in the configuration

source /root/admin-openrc.sh
############OpenStack Networking start here##############
echo "configure legacy networking"
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova 

echo "Restart the Compute services"
service openstack-nova-api restart
service openstack-nova-scheduler restart
service openstack-nova-conductor restart

echo "Create the network"
source /root/admin-openrc.sh
nova network-create vmnet --bridge br0 --multi-host T --fixed-range-v4 $RANGE

echo "Verify creation of the network"
nova net-list

############OpenStack Legacy ends##############
echo "Install the dashboard"
yum install -y mod_wsgi openstack-dashboard

echo "Configure the OpenStack dashboard"
sed -i 's/horizon.example.com/*/g' /etc/openstack-dashboard/local_settings
echo "Start the Apache web server and memcached"
service httpd start
chkconfig httpd on

fi
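
The `sed` edit of `local_settings` in the `-post` block above simply widens `ALLOWED_HOSTS` so Horizon answers on any hostname. Here is the same substitution run against a throwaway file (the file contents are a minimal stand-in, not the full shipped `local_settings`):

```shell
# Scratch stand-in for /etc/openstack-dashboard/local_settings
tmp=$(mktemp)
echo "ALLOWED_HOSTS = ['horizon.example.com', 'localhost']" > "$tmp"

# The same in-place substitution the script applies to the real file
sed -i 's/horizon.example.com/*/g' "$tmp"

cat "$tmp"   # ALLOWED_HOSTS = ['*', 'localhost']
rm -f "$tmp"
```

Note that `'*'` accepts any Host header, which is convenient for a lab install but too permissive for production.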

Below is the config-compute.sh script, which configures the compute node.

config-compute.sh

#!/bin/bash
#configure compute script v4
#############################################
# Rohan Bhagat             ##################
# Email:Me at rohanbhagat.com ###############
#############################################
#set variables used in the configuration
#Nova user pass
NOVA_PASS=YOUR_PASSWORD
#NEUTRON user pass
NEUTRON_PASS=YOUR_PASSWORD
#Nova db user pass
NOVA_DBPASS=YOUR_PASSWORD
FLAT_INTERFACE=eth0
PUB_INTERFACE=eth0
#FQDN for $CONTROLLER hostname or IP
CONTROLLER=controller
#IP of the compute node
MY_IP=192.168.1.142


echo "============================================="
echo "This installation script is based on the OpenStack Icehouse guide"
echo "Found at http://docs.openstack.org/icehouse/install-guide/install/yum/content/index.html"
echo "============================================="

echo "============================================="
echo "compute configuration started"
echo "============================================="

echo "Install the MySQL Python library"
yum install -y MySQL-python


echo "Install the Compute packages"
yum install -y openstack-nova-compute openstack-utils

echo "Edit the /etc/nova/nova.conf configuration file"
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:$NOVA_DBPASS@$CONTROLLER/nova
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password $NOVA_PASS

echo "Configure the Compute service to use the Qpid message broker"
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname $CONTROLLER

echo "Configure Compute to provide remote console access to instances"
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://$CONTROLLER:6080/vnc_auto.html

echo "Specify the host that runs the Image Service"
openstack-config --set /etc/nova/nova.conf DEFAULT glance_host $CONTROLLER

echo "Start the Compute service and its dependencies. Configure them to start automatically when the system boots"
service libvirtd start
service messagebus start
service openstack-nova-compute start
chkconfig libvirtd on
chkconfig messagebus on
chkconfig openstack-nova-compute on

echo "kernel networking functions"
perl -pi -e 's,net.ipv4.ip_forward = 0,net.ipv4.ip_forward = 1,' /etc/sysctl.conf
perl -pi -e 's,net.ipv4.conf.default.rp_filter = 1,net.ipv4.conf.default.rp_filter = 0,' /etc/sysctl.conf
echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
sysctl -p

echo "Install legacy networking components"
yum install -y openstack-nova-network openstack-nova-api
sleep 5
echo "Configure legacy networking"
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova
openstack-config --set /etc/nova/nova.conf DEFAULT network_manager nova.network.manager.FlatDHCPManager
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.libvirt.firewall.IptablesFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT network_size 254
openstack-config --set /etc/nova/nova.conf DEFAULT allow_same_net_traffic False
openstack-config --set /etc/nova/nova.conf DEFAULT multi_host True
openstack-config --set /etc/nova/nova.conf DEFAULT send_arp_for_ha True
openstack-config --set /etc/nova/nova.conf DEFAULT share_dhcp_address True
openstack-config --set /etc/nova/nova.conf DEFAULT force_dhcp_release True
openstack-config --set /etc/nova/nova.conf DEFAULT flat_network_bridge br0
openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface $FLAT_INTERFACE
openstack-config --set /etc/nova/nova.conf DEFAULT public_interface $PUB_INTERFACE

echo "Start the services and configure them to start when the system boots"
service openstack-nova-network start
service openstack-nova-metadata-api start
chkconfig openstack-nova-network on
chkconfig openstack-nova-metadata-api on

echo "Now restart networking"
service network restart

echo "Compute node configuration completed"
echo "Now you can run config-controller.sh -post on the controller node"
echo "to complete the OpenStack configuration"
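
The kernel-networking step in the script above flips `ip_forward` on and relaxes `rp_filter` with in-place `perl` edits. The same edits can be demonstrated against a scratch copy rather than the live `/etc/sysctl.conf`:

```shell
# Scratch stand-in for /etc/sysctl.conf with the stock RHEL values
tmp=$(mktemp)
printf 'net.ipv4.ip_forward = 0\nnet.ipv4.conf.default.rp_filter = 1\n' > "$tmp"

# The same in-place substitutions the script applies to the real file:
# enable forwarding so the node can route instance traffic, and disable
# reverse-path filtering so asymmetric flat-network traffic is not dropped.
perl -pi -e 's,net.ipv4.ip_forward = 0,net.ipv4.ip_forward = 1,' "$tmp"
perl -pi -e 's,net.ipv4.conf.default.rp_filter = 1,net.ipv4.conf.default.rp_filter = 0,' "$tmp"

cat "$tmp"
# net.ipv4.ip_forward = 1
# net.ipv4.conf.default.rp_filter = 0
rm -f "$tmp"
```

On the real system the script then runs `sysctl -p` to load the changed values into the running kernel.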

OTN APAC Tour 2014 : It’s Nearly Here!

In a little less than a week I start the OTN APAC Tour. This is where I’m going to be…

  • Perth, Australia : November 6-7
  • Shanghai, China : November 9
  • Tokyo, Japan : November 11-13
  • Beijing, China : November 14-15
  • Bangkok, Thailand : November 17
  • Auckland, New Zealand : November 19-21

Just looking at that list is scary. When I look at the flight schedule I feel positively nauseous. I think I’m in Bangkok for about 24 hours. It’s sleep, conference, fly. :)

After all these years you would think I would be used to it, but every time I plan a tour I go through the same sequence of events.

  • Someone asks me if I want to do the tour.
  • I say yes and agree to do all the dates.
  • They ask me if I am sure, because doing the whole tour is a bit stupid as it’s a killer and takes up a lot of time.
  • I say, no problem. It will be fine. I don’t like cherry-picking events as it makes me feel guilty, like I’m doing it for a holiday or something.
  • Everything is provisionally agreed.
  • I realise the magnitude of what I’ve agreed to and secretly hope I don’t get approval.
  • Approval comes through.
  • Mad panic for visas, flights and hotel bookings etc.
  • The tour starts and it’s madness for X number of days. On several occasions I will want to throw in the towel and get on a plane home, but someone else on the tour will provide sufficient counselling to keep me just on the right side of sane.
  • Tour finishes and although I’ve enjoyed it, I promise myself I will never do it again.

With less than a week to go, I booked the last of my hotels this morning, so you can tell what stage I’m at now… :)

I was reflecting on this last night and I think I know the reason I agree to these silly schedules. When I was a kid, only the “posh” kids did foreign holidays. You would come back from the summer break and people would talk about eating pasta on holiday and it seemed rather exotic. Somewhere in the back of my head I am still that kid and I don’t really believe any of these trips will ever really happen, so I agree to anything. :)

Cheers

Tim…


OTN APAC Tour 2014 : It’s Nearly Here! was first posted on October 29, 2014 at 10:38 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Google Glass, Android Wear, and Apple Watch

I have both Google Glass and Android Wear (Samsung Gear Live, Moto 360), and oftentimes I wear them together.  People always ask: “How do you compare Google Glass and Android watches?”  Let me address a couple of viewpoints here.  I would like to talk about the Apple Watch, but since it has not been officially released yet, let’s just say that shape-wise it is square and looks like a Gear Live, and its features seem pretty similar to Android Wear’s, with the exception of an attempt to add more playful colors and features.  Let’s discuss it more once it is out.

[Photos: Moto 360, Google Glass, Apple Watch]
I was in the first batch of Google Glass Explorers and got my Glass in mid-2013.  In the middle of this year, I first got the Gear Live, then later the Moto 360.  I always find it peculiar that Glass is the older technology while Wear is the newer one.  Should it not have been easier to design a smartwatch before a glassware?

I do find a lot of similarities between Glass and Wear.  The fundamental similarity is that both are Android devices.  They are voice-input enabled and show you notifications.  You may install additional Android applications to personalize your experience and maximize your usage.  I see these as the true values of wearables.

Differences?  Glass has a lot of capabilities that Android Wear lacks at the moment.  The things that probably matter most to people are sound, phone calls, video recording, picture taking, a hands-free heads-up display, GPS, and wifi.  Unlike Android Wear, Glass can be used standalone; Android Wear is only a companion gadget and has to be paired with a phone.

Is Glass superior, then?  Android Wear does provide better touch-based interaction, compared to swiping on the side of the Glass frame.  You can also play simple games like Flopsy Droid on your watch.  Pedometers and heart-rate sensors are also commonly included.  Glass tends to overheat easily.  Water resistance plays a role here too: you would almost never want to get your Glass wet at all, while Android Wear is water-resistant to a certain degree.  And when you are charging your watch at night, it doubles as a bedside clock.


For me personally, although I have owned Glass longer than Wear, I have to say I prefer Android Wear over Glass for a couple of reasons.  First, there is the significant price gap ($1,500 vs. $200).  Second, especially once you add prescription lenses to Glass, it gets heavy and hurts the ear when worn for an extended period of time.  Third, I do not personally find the additional features Glass offers useful in my daily activities; I do not normally take pictures other than at specific moments or while I am traveling.

I also find that even though Glass is now publicly available within the US, it is still perceived as an anti-social gadget.  The term is defined in Urban Dictionary as well.  Most of the people I know who own Glass do not wear it themselves, for various reasons.  I believe improving Glass’s marketing and advertising strategy may help.

Gadget preference is personal.  What’s yours?