Deploying MySQL on Oracle Clusterware – Part 5

Installing MySQL

There are a few ways to get the MySQL software set up for clustering.  One option is to install the rpm for MySQL on both cluster nodes.  There is also a tar file containing what you need to create a yum repository that can be used by multiple servers.  You could also use a tar of the software and just extract it to a shared filesystem.  The benefit of the last option is that you can have multiple instances of MySQL hosted on the same cluster running different versions.

I decided to have MySQL 5.7 installed from a yum repository, but also have a MySQL 5.6 home extracted from a tar in case you need to run an older version concurrently.

At the time of writing, I used the following patches from MOS, as they contained the latest versions of MySQL 5.7 and MySQL 5.6:

  • Patch 25236538: MySQL Database 5.7.17 Yum Repository TAR for Oracle Linux / RHEL 6 x86 (64bit)
  • Patch 24412279: MySQL Enterprise Backup 4.0.3 RPM for Oracle Linux / RHEL 6 x86 (64bit)
  • Patch 25238523: MySQL Database 5.6.35 TAR for Generic Linux (glibc2.5) x86 (64bit)
  • Patch 23250313: MySQL Enterprise Backup 3.12.3 TAR for Generic Linux (glibc2.5) x86 (64bit)

Yum based install

The cluster deployment should have created a filesystem at /u02/app/mysql/product by default.  We are going to use this filesystem to store a yum repository and any other MySQL versions we would like to have.  First, let's extract the yum repository:

# cd /u02/app/mysql/product/
# mkdir yum
# cd yum
# unzip /root/p25236538_570_Linux-x86-64.zip
# tar xf mysql-commercial-5.7.17-1.1.el6.x86_64.repo.tar.gz
# rm -rf mysql-commercial-5.7.17-1.1.el6.x86_64.repo.tar.gz*
# rm -f README.txt

Once this is done, you can create a file named /etc/yum.repos.d/mysql.repo on both nodes containing:

[mysql-5.7]
name=mysql-5.7
baseurl=file:///u02/app/mysql/product/yum/mysql-5.7/
gpgkey=file:///u02/app/mysql/product/yum/RPM-GPG-KEY-mysql
gpgcheck=1
enabled=1

You could also set up a yum repository elsewhere or use Spacewalk, but since I have only licensed a limited number of machines, I prefer to restrict access to the installer so it isn't inadvertently used.  You should now be able to install MySQL on both nodes:

# yum install mysql-commercial-server
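
To confirm the package is in place on each node, a quick sanity check could look like this (standard commands, assuming the install above succeeded):

# rpm -q mysql-commercial-server
# mysqld --version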

The MySQL Enterprise Backup patch isn't packaged as a yum repository.  I just extracted it and installed the RPM directly (make sure you do the install on both nodes):

# cd /u02/app/mysql/product/
# mkdir meb-rpm
# cd meb-rpm/
# unzip p24412279_400_Linux-x86-64.zip
# yum install meb-4.0.3-el6.x86_64.rpm

Tar based install

Getting a tar based MySQL home is easy:

# cd /u02/app/mysql/product
# mkdir server
# cd server
# unzip /root/p25238523_56_Linux-x86-64.zip
# tar xf mysql-advanced-5.6.35-linux-glibc2.5-x86_64.tar.gz
# rm -f mysql-advanced-5.6.35-linux-glibc2.5-x86_64.tar.gz* README.txt
# mv mysql-advanced-5.6.35-linux-glibc2.5-x86_64 5.6.35-advanced

It's also easy to get a MySQL Enterprise Backup home set up:

# cd /u02/app/mysql/product
# mkdir meb
# cd meb
# unzip /root/p23250313_3120_Linux-x86-64.zip
# tar xf meb-3.12.3-linux-glibc2.5-x86-64bit.tar.gz
# rm -f meb-3.12.3-linux-glibc2.5-x86-64bit.tar.gz* README.txt
# mv meb-3.12.3-linux-glibc2.5-x86-64bit 3.12.3

Creating an ACFS filesystem for database storage

Now we will create another ACFS filesystem to store the actual database files.  We are going to use ACFS snapshots as well to make it easy to clone individual databases if you ever want to.  First, we need to create the mount point for the filesystem:

# mkdir /u02/app/mysql/datastore
# chown mysql:oinstall /u02/app/mysql/datastore

Once this is done, we can create the filesystem.  We are going to put this on the DATA diskgroup.  If your DATA diskgroup was created by the cluster deployment, it is probably set to older ASM and ADVM compatibility attributes that lack the features we want.  These can be upgraded as the oracle user with:

$ asmcmd
ASMCMD> setattr -G DATA compatible.asm 12.1.0.2.0
ASMCMD> setattr -G DATA compatible.advm 12.1.0.2.0
ASMCMD> setattr -G DATA compatible.rdbms 12.1.0.2.0
ASMCMD> volcreate -G DATA -s 70G MYSQL_DATA

Once this is done, we can run this as root to create the filesystem and mount it (your device name may be slightly different):

# mkfs.acfs -f /dev/asm/mysql_data-466
# acfsutil registry -a -f /dev/asm/mysql_data-466 /u02/app/mysql/datastore
# mount.acfs -o all

Now that the filesystem is mounted, we can create an ACFS snapshot to store the database in when we create it:

# acfsutil snap create -w mydb1 /u02/app/mysql/datastore

I prefer to have the snapshot name match the short hostname of the virtual IP that will host the database (e.g. the snapshot mydb1 corresponds to the database hosted by mydb1.example.com).
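
If you later want to double-check which snapshots exist on the filesystem, acfsutil can list them; something like this should do it:

# acfsutil snap info /u02/app/mysql/datastore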

Install the Standalone Agent

We need to install the Standalone Agent for Oracle Clusterware, because the agents built into Clusterware do not include MySQL support.  You can download it here.  At the time of writing this was version 7.1 with a file name of xagpack_7b.zip.  You need to get a copy of this zip file to one of your nodes, extract it, and then run this as the oracle user:

$ ./xagsetup.sh --install --directory /u01/xag-standalone --all_nodes

You may need to create the directory you are installing to as root and edit the permissions if you are installing somewhere that the oracle user cannot create directories.
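
For example, something along these lines as root before running xagsetup.sh should cover it (the oinstall group is an assumption based on a standard Grid Infrastructure install):

# mkdir /u01/xag-standalone
# chown oracle:oinstall /u01/xag-standalone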

After the install, I’ve found that some additional tweaking needs to be done.  This may not apply if you are running your MySQL instances as the oracle user instead of the mysql user.  I had to do the following on both nodes to get things to work as expected:

# cd /u01/xag-standalone
# chmod -R 755 perl/
# cd log/
# mkdir `hostname -s`
# chown oracle:mysql `hostname -s`
# chmod 775 `hostname -s`

The documentation isn’t really clear if it expects MySQL to run as the oracle user or as the mysql user.  There are examples of creating the MySQL instance to run as the mysql user, but then you will encounter the issues I mentioned above.  The documentation also talks about privileges the oracle account needs, but there are also privileges the mysql user will need.  I’ll cover this in more detail in another part.

In the next part, we will create our MySQL instance and start it up.


Installing Qubes OS 3.2 on a HP Zbook Studio G3

I recently got a demo unit of an HP Zbook Studio G3 to play with.  It is a tricked-out one with a 4k display and an NVMe drive.  I wanted to try out Qubes OS on it, but the installer would lock up when I tried to boot it.

Getting Qubes to install and boot

The rough steps for getting a HP Zbook Studio G3 to install Qubes OS 3.2 are:

  • Get a copy of the Qubes OS 3.2 ISO
  • Write the ISO to a USB Drive
  • Mount the EFI boot partition of the USB drive
    • Windows will automatically mount it if you plug in the USB drive – a Mac won’t recognize it at all
  • Modify the EFI/BOOT/xen.cfg file so that all of the kernel lines contain acpi_osi=! acpi_osi="Windows 2009" (see the example line after this list)
    • I found this useful tip on the ArchWiki
  • Disable Secure Boot in the BIOS
  • Boot the USB drive and run the install
  • After the install completes, boot from the USB drive again and choose "Rescue a Qubes system"
  • Let the recovery environment mount your install
  • Run “setfont sun12x22” so you don’t go blind
  • Modify /mnt/sysimage/boot/efi/EFI/qubes/xen.cfg so that all of the kernel lines contain acpi_osi=! acpi_osi="Windows 2009".  The installer tries to do this itself but doesn't get it quite right.
  • Do a “chroot /mnt/sysimage” and run “efibootmgr -v -c -L Qubes -l /EFI/qubes/xen.efi -d /dev/nvme0n1p1” to work around an issue with the installer and NVMe drives
  • Your system should now be bootable and you can finish running through setup
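
For illustration, an edited kernel line in xen.cfg ends up looking roughly like this; everything before the two acpi_osi entries is whatever was already there (the kernel version and existing options below are placeholders):

kernel=vmlinuz-<version> <existing options left unchanged> acpi_osi=! acpi_osi="Windows 2009"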

Setting up the screen so you don’t go blind

Once you have the system up and running, the default settings aren't very friendly for a 4k display.  I was able to get XFCE to use a bigger font DPI, but KDE seems to have better 4k display support.  You should be able to follow the documentation, with one minor change.  Make sure you also add "-dpi 192" to the line you edit in /etc/sddm.conf.
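
For reference, the edited entry in /etc/sddm.conf should end up looking something like this (the existing arguments on your install may differ; the only addition is -dpi 192):

[X11]
ServerArguments=-nolisten tcp -dpi 192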


SSSD and Active Directory Primary Group

If you're ever scratching your head because you're seeing messages like this when trying to diagnose an sssd issue with an ad_access_filter for the user foobar:

 [sdap_access_filter_done] (0x0100): User [foobar] was not found with the specified filter. Denying access.

You just know that foobar is a member of one of the groups the ad_access_filter is looking for, so what is going on?  The issue is probably that foobar is a member of the group, but also has that group set as its primary group.

The primary group of an account in Active Directory doesn’t appear under the account’s memberOf LDAP attribute.  You’ll have to add the primaryGroupID attribute to your ad_access_filter.
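
As a hypothetical example (the group DN and the RID value are placeholders), the filter in sssd.conf could be extended to also match on the primary group's relative identifier:

ad_access_filter = (|(memberOf=CN=linux-admins,OU=Groups,DC=example,DC=com)(primaryGroupID=12345))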


Deploying MySQL on Oracle Clusterware – Part 4

If you are not deploying an extended distance cluster, you can skip on to part 5 of this series.

Configure the witness site

At your witness site all you need is a Linux machine running an NFS server.  This could be something as simple as an EC2 instance running in Amazon.  The Oracle whitepaper on this that I referenced is available here.  You need to create a directory and export it to both cluster nodes.  For example, you could add these lines to /etc/exports on the NFS server:

/mysql-prod mysql-prod-dc1-1.example.com(rw,sync,all_squash,anonuid=54321,anongid=54322)
/mysql-prod mysql-prod-dc2-1.example.com(rw,sync,all_squash,anonuid=54321,anongid=54322)

Once that is done, you just need to do the following as root on each node:

# chkconfig netfs on
# chkconfig rpcbind on
# service rpcbind start
# mkdir /voting_disk

Add a line like this to /etc/fstab on each node:

nfshost.example.com:/mysql-prod        /voting_disk    nfs     _netdev,rw,bg,soft,intr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600       0 0

If you have not yet installed the patch for bug 19373893, you need to mount the NFS share with the hard option instead of soft.  However, if you do that and your NFS server becomes unreachable (or your EC2 instance just up and disappears, as they occasionally do), your entire cluster will crash.  By default, ASM will not let you add a disk image that lives on a soft-mounted NFS share; the patch is what allows you to do that.
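
If you do have to start out with a hard mount until the patch is in place, the fstab entry is identical apart from that one option (keeping the cluster-crash caveat above in mind):

nfshost.example.com:/mysql-prod        /voting_disk    nfs     _netdev,rw,bg,hard,intr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600       0 0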

Once the NFS share is mounted we need to create a disk image to be used as a voting disk for ASM.  Run this command as the oracle user/grid owner on one node:

$ dd if=/dev/zero of=/voting_disk/nfs_vote bs=1M count=500

With this out of the way, we can create the CRS diskgroup.  There are lots of ways to do this.  If you want a GUI you could use asmca.  From the command line you could use asmcmd or SQL*Plus.  I have had issues in the past adding the quorum disk with asmca, so I chose to go the SQL*Plus route.

$ sqlplus / as sysasm

SQL*Plus: Release 12.1.0.2.0 Production on Tue Nov 29 16:33:03 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> create diskgroup CRS normal redundancy failgroup DC2 disk '/dev/xvdc' name CRS_1234 failgroup DC1 disk '/dev/xvdd' name CRS_5678;

Diskgroup created.

SQL> alter system set asm_diskstring = '/dev/xvd[c-l]','/voting_disk/nfs_vote' scope=both;

System altered.

SQL> alter diskgroup CRS set attribute 'compatible.asm'='12.1.0.2.0';

Diskgroup altered.

SQL> alter diskgroup CRS add quorum failgroup AWS DISK '/voting_disk/nfs_vote' NAME CRS_NFS;

Diskgroup altered.

You may also need to run "alter diskgroup CRS mount;" on the other node in the cluster depending on how you created the diskgroup.

Next, we need to tell the cluster to use the new diskgroup as the voting diskgroup instead of the DATA diskgroup created during provisioning.  We also are going to relocate the cluster registry to the CRS diskgroup.

# /u01/app/12.1.0/grid/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   af5b1804b7ef4f32bfb026d0a2862a4b (/dev/xvde) [DATA]
 2. ONLINE   b177453910564f80bf010ab3c8707d91 (/dev/xvdf) [DATA]
 3. ONLINE   5c82a722c1464ff0bf980648479a1837 (/dev/xvdg) [DATA]
Located 3 voting disk(s).
# /u01/app/12.1.0/grid/bin/crsctl replace votedisk +CRS
Successful addition of voting disk ec3c77f781624f37bf9aafd63411f447.
Successful addition of voting disk 63ec90a312a84f1bbffa93195a294a3d.
Successful addition of voting disk 6a5be824b8714ff0bfe2310d6e80b073.
Successful deletion of voting disk af5b1804b7ef4f32bfb026d0a2862a4b.
Successful deletion of voting disk b177453910564f80bf010ab3c8707d91.
Successful deletion of voting disk 5c82a722c1464ff0bf980648479a1837.
Successfully replaced voting disk group with +CRS.
CRS-4266: Voting file(s) successfully replaced
# /u01/app/12.1.0/grid/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   ec3c77f781624f37bf9aafd63411f447 (/dev/xvdc) [CRS]
 2. ONLINE   63ec90a312a84f1bbffa93195a294a3d (/dev/xvdd) [CRS]
 3. ONLINE   6a5be824b8714ff0bfe2310d6e80b073 (/voting_disk/nfs_vote) [CRS]
Located 3 voting disk(s).
# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
Version                  :          4
Total space (kbytes)     :     409568
Used space (kbytes)      :       1508
Available space (kbytes) :     408060
ID                       : 2146637587
Device/File Name         :      +DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

# /u01/app/12.1.0/grid/bin/ocrconfig -add +CRS

# /u01/app/12.1.0/grid/bin/ocrconfig -delete +DATA

# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
Version                  :          4
Total space (kbytes)     :     409568
Used space (kbytes)      :       1508
Available space (kbytes) :     408060
ID                       : 2146637587
Device/File Name         :       +CRS
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

Fix the DATA diskgroup

As it stands now, the DATA diskgroup has its 4 disks spread across 4 different failgroups.  We need to fix it so that disks presented from the same array are in the same failgroup.  I did this with SQL*Plus; it seems to work best if you first drop a single disk from each eventual failgroup.  Once that is done, add the disks back into properly named failgroups.

SQL> alter diskgroup DATA drop disk DATA_0000 rebalance power 11 wait;

Diskgroup altered.

SQL> alter diskgroup DATA drop disk DATA_0002 rebalance power 11 wait;

Diskgroup altered.

SQL> alter diskgroup DATA add failgroup DC2 disk '/dev/xvde' name DATA_1234 failgroup DC1  disk '/dev/xvdg' name DATA_5678 rebalance power 11 wait;

Diskgroup altered.

Now, you can drop the other two disks and add them back in the proper failgroup.

SQL> alter diskgroup DATA drop disk DATA_0001 drop disk DATA_0003 rebalance power 11 wait;

Diskgroup altered.

SQL> alter diskgroup DATA add failgroup DC2 disk '/dev/xvdf' name DATA_1235 failgroup DC1 disk '/dev/xvdh' name DATA_5679 rebalance power 11 wait;

Diskgroup altered.

Create the NFS diskgroup

I ended up creating the NFS diskgroup in SQL*Plus as well, in one shot.  If you aren't going to use HA NFS, you can skip this.

SQL> create diskgroup NFS normal redundancy failgroup DC2 disk '/dev/xvdi' name NFS_1234 failgroup DC2 disk '/dev/xvdj' name NFS_1235 failgroup DC1 disk '/dev/xvdk' name NFS_5678 failgroup DC1 disk '/dev/xvdl' name NFS_5679;

In the next part, we will get the MySQL software installed and create ACFS filesystems.


Deploying MySQL on Oracle Clusterware – Part 3

Deploy the regular cluster

You will be much better off following the deploycluster instructions for deployment than the directions farther down.  Those directions still work, but they are quite a bit more manual.

I have posted an example netconf.ini and params.ini that you could use with a little bit of tweaking.  The deploycluster zip file has some sample params.ini and netconfig.ini you can look over as well.  You would probably want to reduce the number of disks used in params.ini and change the diskgroup redundancy as well.

One thing to note is that you need to have a DNS entry that resolves to at least one IP for whatever you specify as SCANNAME in netconf.ini.  We won’t be using SCAN as we aren’t deploying an Oracle Database, but as far as I know there is no way to remove it/shut it off.
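
Purely as an illustration, the node entries and SCAN settings in netconfig.ini look something like the following.  The key names here are from memory of the sample file and every value is a placeholder, so treat the samples shipped with deploycluster as authoritative:

NODE1=mysql-prod-1
NODE1IP=192.0.2.11
NODE1PRIV=mysql-prod-1-priv
NODE1PRIVIP=10.0.0.11
NODE1VIP=mysql-prod-1-vip
NODE1VIPIP=192.0.2.21
# NODE2 entries follow the same pattern
PUBADAPTER=eth0
PUBMASK=255.255.255.0
PUBGW=192.0.2.1
PRIVADAPTER=eth1
PRIVMASK=255.255.255.0
DOMAINNAME=example.com
DNSIP=192.0.2.53
SCANNAME=mysql-prod-scan.example.com
SCANIP=192.0.2.31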

You will want to skip part 4 of this series because it documents changes only necessary for an extended distance cluster.

Deploy an extended distance cluster

Send VM Messages

Now that we have our virtual machines provisioned, we can power them on.  Normally, one could use the deploycluster tool from Oracle to configure the virtual machines, but since there are multiple Oracle VM Managers involved we have to do it manually.  This manual deployment was probably the hardest part to figure out – it's not very well documented.  This blog entry from Bjorn Naessens was my best resource.  I recommend you still download the deploycluster tool, as it has better examples/documentation of the answer files I am using here.

After power up you are greeted with this:

[Screenshot: the first boot configuration prompt on the VM console]

You could manually type a bunch of stuff into the console of each machine, but it is much easier to use Oracle VM Messages.  Select the VM in Manager and press the “Send VM Messages” button.

To do a message based/manual deployment you need to send the following messages to the VM:

  • com.oracle.racovm.netconfig.arguments == “-n#” (where # is the node number matching the VM's entry in netconfig.ini – don't send the quotes)
  • com.oracle.racovm.netconfig.contents.0 == The contents of the netconfig.ini file
  • com.oracle.racovm.params.contents.0 == The contents of the params.ini file
  • com.oracle.racovm.racowner-password == password for the oracle user
  • com.oracle.racovm.gridowner-password == password for the grid user (only needed if doing privilege separation)
  • com.oracle.racovm.netconfig.interview-on-console == “NO”
  • com.oracle.linux.root-password == password for the root user   (should be the last message sent)

Optionally you could use these messages as well:

  • com.oracle.linux.datetime.timezone == “America/Detroit”
  • com.oracle.linux.datetime.ntp-local-time-source == “False”
  • com.oracle.linux.datetime.ntp-servers == “ntp.example.com,ntp2.example.com”

Here is an example of the messages I sent to one of the nodes:

[Screenshot: the VM messages sent to one of the nodes]

Build the cluster

The netconf.ini and params.ini will need to be tweaked for your environment.  As noted above, whatever you specify as SCANNAME needs a DNS entry that resolves to at least one IP, even though we won't be using SCAN.

Once both nodes are up and at a login screen, you can log into one of them and start provisioning the cluster.  Just run buildcluster.sh, provide the requested passwords, and off it goes.

[Screenshot: buildcluster.sh starting]

If all goes well, you should have a cluster in around 25 minutes.

[Screenshot: buildcluster.sh completing successfully]

Troubleshooting

If you see an error about asmca failing, it is probably because of what appears to be a bug with the password prompt.  If RACPASSWORD is commented out in params.ini, then buildcluster.sh is supposed to prompt you for the password.  It does prompt you, but it seems to fail like this with a Grid Infrastructure only deployment:

[Screenshot: asmca error during buildcluster.sh]

The workaround is to set the value of RACPASSWORD in params.ini on the node you are running buildcluster.sh from.  Once the cluster is up, you can delete or comment out that line.  You could also put that line in the params.ini that you send as a message to the VMs, but I try to avoid plain-text passwords personally.
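
As a hypothetical illustration of that workaround (the password shown is a placeholder), the relevant line in the local params.ini would simply be set like this:

# Set only on the node running buildcluster.sh; delete or re-comment once the cluster is up
RACPASSWORD=ChangeMe123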



Deploying MySQL on Oracle Clusterware – Part 2

Create the LUNs for the cluster

Before we deploy the cluster, we need to create the shared disks that the cluster will use.  The layout that you want to use will depend on the recommendations from your storage vendor.  The layout will also be affected by whether or not you are doing an extended distance cluster.  I am setting this up on Dell Compellent storage, so I followed the documentation in the Dell Storage Center and Oracle12c Best Practices and Dell Compellent – Oracle Extended Distance Clusters.

About the storage I used

Compellent systems are active/passive, so for best performance and easy manageability we want one LUN from each controller going to a disk group.  Each Compellent controller has a unique serial number, so I decided to use them in the disk names to help with identification.  For example, let's say serial numbers 1234 and 1235 are the controller pair in one system and 5678 and 5679 are the pair in another.  Another notable thing about this storage is that Dell recommends not using partitions on the ASM disks.  That way you can easily grow them in the future and not need to add additional LUNs for growth.

LUNs for Extended Distance Cluster

  • CRS Diskgroup  (used for voting files and cluster registry)
    • mysql-prod CRS-1234    –    5 GB Tier 1 Storage
    • mysql-prod CRS-5678    –    5 GB Tier 1 Storage
  • DATA Diskgroup (used to store MySQL databases)
    • mysql-prod DATA-1234    –    100 GB Tier 1 Storage
    • mysql-prod DATA-1235    –    100 GB Tier 1 Storage
    • mysql-prod DATA-5678    –    100 GB Tier 1 Storage
    • mysql-prod DATA-5679    –    100 GB Tier 1 Storage
  • NFS Diskgroup (used to store HA NFS shares)
    • mysql-prod NFS-1234    –    100 GB All Tiers
    • mysql-prod NFS-1235    –    100 GB All Tiers
    • mysql-prod NFS-5678    –    100 GB All Tiers
    • mysql-prod NFS-5679    –    100 GB All Tiers

We will let ASM handle the mirroring of disks between the two sites.  We will also need to make sure that volumes from 1234 and 1235 are in the same failgroup and volumes from 5678 and 5679 are in another failgroup.  We are essentially striping across the volumes within a failgroup for performance and mirroring between failgroups.

We need a separate diskgroup called CRS because of Doc ID 1992968.1/Bug 20473959.  If we used the DATA diskgroup for the voting disks, adding the NFS quorum disk would make it so we couldn’t use ACFS on that diskgroup.  Without ACFS, there wouldn’t be a filesystem to store our MySQL databases on.

The LUNs mentioned above need to be exported to BOTH Oracle VM clusters.  You will also need a LUN in each datacenter presented only to the local site to host the Repository for the VMs as well.

The NFS diskgroup isn’t a necessity.  It is there so that you can also create HA NFS file shares hosted by the cluster.  If you don’t have a need for that, then feel free to skip it.

LUNs for a regular cluster

  • DATA Diskgroup (used to store MySQL databases)
    • mysql-prod DATA-1234    –    100 GB Tier 1 Storage
    • mysql-prod DATA-1235    –    100 GB Tier 1 Storage
  • NFS Diskgroup (used to store HA NFS shares)
    • mysql-prod NFS-1234    –    100 GB All Tiers
    • mysql-prod NFS-1235    –    100 GB All Tiers

Since we are just doing a regular cluster in this case, we don't need a CRS diskgroup as we do above.  You would also want to use external redundancy for the disk groups, as the storage array is already providing redundancy.  We want to have a LUN on each controller to stripe across for better performance.  The voting disk will just be one of the disks in the DATA diskgroup.
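
If you ever need to create such a diskgroup by hand (the template deployment normally creates DATA for you), a minimal SQL*Plus sketch with external redundancy would look like this, with placeholder device names:

SQL> create diskgroup DATA external redundancy disk '/dev/xvde' name DATA_1234, '/dev/xvdf' name DATA_1235;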

Again, the NFS diskgroup isn’t a necessity.  It is there so that you can also create HA NFS file shares hosted by the cluster.  If you don’t have a need for that, then feel free to skip it.

Create the virtual machines

We need to clone the template we want into a virtual machine running in each datacenter. For this example, let's call them mysql-prod-dc1-1 and mysql-prod-dc2-1 (where dc1 and dc2 are unique identifiers for the datacenter they are in).  If you are not making an extended distance cluster, then maybe mysql-prod-1 and mysql-prod-2 are better names.  Once that is done we need to edit the virtual machines.  We need to set the first network interface to be on the public network that we want to use for the cluster.  The second network interface needs to be on the private network we are using for cluster heartbeats.

[Screenshot: network mappings for the virtual machine]

We also need to assign the ASM LUNs as physical disks to both virtual machines.  They need to be assigned in the same order on both.  Obviously, if you are not doing an extended distance cluster you will not have as many LUNs to present to the virtual machines.  The important thing is that the shared LUNs are in the exact same slots on both machines.

[Screenshot: physical disk mappings for the virtual machine]


Deploying MySQL on Oracle Clusterware – Part 1

Introduction

One of the cool things about working in higher education is that in certain cases there is infrastructure in place that you could only dream of in other industries. In this case, I’m referring to having multiple datacenters on campus with lots of single mode fiber run between them. This makes it easy to have a stretched SAN fabric as well as private networks for cluster heartbeats.

Extended Distance Cluster

With SAN storage available in both datacenters, it makes sense to deploy an extended distance cluster for mission critical systems. Oracle VM doesn’t support an extended distance cluster per Doc ID 1602029.1. That said, Oracle RAC does support this scenario. There is documentation here for setting up an Oracle RAC extended distance cluster.

But what if I wanted to have a setup like this for other applications? If you have Oracle Hardware under Premier Support or certain Oracle Linux subscriptions, then you have the right to use Oracle Clusterware to protect applications running on Oracle Linux.  You also have the right to use Oracle ACFS.  Additionally, if you have a MySQL Enterprise subscription, then you have the right to use (see page 49) the MySQL Agent for Oracle Clusterware.  One thing to note is that together Oracle ASM and Oracle Clusterware are called Oracle Grid Infrastructure.  This is the software that makes Oracle RAC possible, so it should be possible to cluster any software in a similar manner.  However, I am not a lawyer so you should confirm that you do have the right to use any of the software I have mentioned.

If you follow the instructions for an Extended Distance Cluster, I am assuming:

  • You have two separate Oracle VM clusters in two separate datacenters.
  • You have a SAN in both datacenters and can present LUNs from them to both Oracle VM clusters.
  • You have networking in place to have a private cluster network between the datacenters.
  • You have a third site to use as a witness – could just be AWS.

“Regular” Cluster

You could also deploy a “regular” HA cluster with some slight modifications to these instructions.  I'll try to point out the differences along the way.  The process will actually be much simpler if you are not trying to deploy an extended distance cluster.

Getting Started

The easiest way I have found to get started is to use the Oracle VM Templates for Oracle Database. During deployment, you can just deploy an empty Grid Infrastructure cluster instead of a full RAC cluster. You can just delete the Oracle Database home after deployment.  You don’t have to install or configure the Grid Infrastructure software yourself – you just fill in an answer file.

You can deploy the template as is, or you can tweak the template before deployment.  The template lags a little behind the current quarterly updates.  You can install patches yourself into a new template, on a deployed VM before building the cluster, or after deploying the cluster.  See the FAQ of the template notes for details on that process.  For our purposes, you'll want to at least install patch 19373893 at some point.  This makes it so you can soft mount your NFS share from the witness site – more on that later.  You can even mix and match disks between template versions.  I took the 12.1.0.2.160719 template and used just the software disk.  I used the system disk from the 11.2.0.4.160719 template because I wanted Oracle Linux 6 instead of 7.




Friendly Physical Disk Names in Oracle VM Manager with Dell SC Storage

Are you using Dell SC Series (Compellent) Storage with Oracle VM?  If you are, then you are probably sick of seeing physical disks named like this:

[Screenshot: physical disks with unfriendly auto-generated names in Oracle VM Manager]

I have written a script that will pull volume names from Dell Storage Manager (formerly Dell/Compellent Enterprise Manager) and set the physical disk names to match in Oracle VM Manager.  I have tested this with Dell Enterprise Manager 2015 R2 and Dell Storage Manager 2016 R1.  I have also tested this with Oracle VM Manager 3.3.3 and 3.3.4.  I suspect it will work with Oracle VM Manager 3.4.x, but I have not tested it yet.

Essentially, the script gets a list of volumes from Dell Storage Manager and a list of physical disks from Oracle VM Manager.  It matches up the Page 83 information from both and sets the physical disk name in Oracle VM Manager to match the volume name.  The code is available here.

You can run it manually from anywhere you have Python installed, or you could add a cron job to run it from your Oracle VM Manager host (there is an example cron entry after the screenshot below). Once it has run, if you log into Oracle VM Manager you should have much better disk names:

[Screenshot: physical disks renamed to match the Dell volume names]

The image has had parts of the names redacted, but you get the idea.
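
If you go the cron route mentioned above, a nightly entry on the Oracle VM Manager host might look something like this (the script path, name, and schedule here are made up for illustration):

# Illustrative crontab entry; adjust the path to wherever you placed the script
0 2 * * * /usr/bin/python /opt/scripts/dsm_ovm_disk_names.py >> /var/log/dsm_ovm_disk_names.log 2>&1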
