Thursday, July 13, 2017

Role of # in SQL*Plus

The # character is for commenting in SQL*Plus, right?

The character # has mostly been used for comments in many languages, such as shell scripts and Python. Interestingly, # is legal syntax in SQL*Plus scripts as well; but is it a comment? The answer is no, it's not. The purpose of # in SQL scripts is very different, and you should be very careful using it.

Entering # tells SQL*Plus to temporarily pause what has been entered so far and execute everything after the # sign, as if in a different session. Here is a use case. Suppose you are writing this query:

SQL> select *
  2  from v$sesstat
  3  where


At this point you are stuck. You don't remember the column names of V$SESSTAT, but you need them to write the predicate. What can you do? You could just press ENTER, which causes the SQL*Plus prompt to be redisplayed, and then describe the view to get your answer. However, that means you will have lost all the input you have entered until then. For a long, complicated query, that is hardly desirable.

This is where the # symbol comes to the rescue. Just type # followed by the command you want executed. In this case you want to describe the view, so enter this at the 4th line prompt of SQL*Plus:

  4  #desc v$sesstat

When you enter this command and press ENTER, SQL*Plus will pause the statement it is currently collecting and execute the desc v$sesstat command. Here is the output:

 Name                                      Null?    Type
 ----------------------------------------- -------- ----------
 SID                                                NUMBER
 STATISTIC#                                         NUMBER
 VALUE                                              NUMBER
 CON_ID                                             NUMBER

After this display, SQL*Plus redisplays the prompt at line 4. Now that you know the column names, enter the rest of the SQL statement you were entering:

  4  con_id = 1 
  5  /

And that's it. Note how SQL*Plus didn't discard any of the input already entered. Instead, it merely paused the statement entry, executed the command "desc v$sesstat" after the # symbol, and resumed. So # is not a comment character; it is a way to run a new command immediately without discarding the statement already in play in the session.
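
This works with any SQL*Plus command, not just DESCRIBE. For example, if mid-statement you suddenly need to confirm which user you are connected as, a quick check might look like this (a hypothetical snippet; the predicate and output are just illustrations):

SQL> select *
  2  from v$sesstat
  3  #show user
USER is "SYS"
  3  where sid = 1
  4  /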

Unintended Ramifications

This behavior of # character may result in some scenarios not at all intended. Take for instance the following scenario. We create a table, insert a row, do not commit and then check the number of records.

SQL> create table t (col1 number);

Table created.

SQL> insert into t values (1);

1 row created.

SQL> select count(1)
  2  from t
  3  where 1 > 0
  4  /

  COUNT(1)
----------
         1

We got the result we expected. Now let's introduce a small line, #roll, between the commands. Perhaps someone assumed it was a comment and left it there.

SQL> select count(1)
  2  #roll
Rollback complete.
  2  from t
  3  where 1 > 0
  4  /

  COUNT(1)
----------
         0

Note how the count came back as 0. Why? Because the rollback executed independently of the select command being entered. By the time the select statement completed, the rollback had already undone the insert, and hence there were no rows in the table.

In some cases this behavior might prove deadly. Here is another example I have seen, from a script that shuts down the database, takes a storage snapshot and restarts it.

shutdown abort
startup

Later it was decided that shutdown abort was not desirable; it should be shutdown transactional. So the DBA changed the script to the following. Assuming the # character to be a comment, she thought she was commenting out the line instead of removing it:

#shutdown abort
shutdown transactional
startup

Now let's run the script:

SQL> @b
ORACLE instance shut down.
ERROR:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Process ID: 0
Session ID: 0 Serial number: 0
ORACLE instance started.

Total System Global Area 5117050880 bytes
Fixed Size                  8757424 bytes
Variable Size            1056968528 bytes
Database Buffers         4043309056 bytes
Redo Buffers                8015872 bytes
Database mounted.
Database opened.                                                                                                                    

What happened? The shutdown failed with "ORACLE not available". Why?

The answer is simple. The # character is not a comment, so the shutdown abort executed anyway and brought the database down. Therefore the subsequent shutdown transactional failed.

Don't Like #?

Now that you see how some people may mistake # for a comment and make mistakes, you may not want this special property assigned to #. Instead, you may want another character, say "*". You can easily change it: the SQL*Plus setting SQLPREFIX allows you to do that:

SQL> set sqlprefix *

Now, let's repeat the same example we used earlier but with * instead of #:

SQL> select count(1)
  2  * desc t
 Name                                      Null?    Type
 ----------------------------------------- -------- --------
 COL1                                               NUMBER

  2  from t;

  COUNT(1)
----------
         0

What if you use # character in this case?

SQL> select count(1)
  2  # desc t
  3  from t;
# desc t
*
ERROR at line 2:
ORA-00911: invalid character

You will not get anywhere; you will get an error. That's probably better than # being silently executed under the assumption that it is a comment.
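
And if what you actually want is a comment, SQL*Plus and SQL offer three well-established mechanisms, none of which involves #:

REM This is a SQL*Plus comment
-- This is a single-line SQL comment
/* This is a
   multi-line SQL comment */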

Takeaways

  1. The # character is not a comment in SQL scripts.
  2. When SQL*Plus encounters a # character at the first position of a line, it temporarily pauses the statement being entered, immediately executes the command after the #, and then resumes the statement entry.
  3. You can assign this behavior to another character by setting SQLPREFIX to that character.

Monday, July 10, 2017

Enough with AFIEDT.BUF

You are in SQL*Plus. You entered a command and urgh, there was a typo. No worries, you bring up the command in an editor by typing:

SQL> ed

This opens an editor, such as notepad.exe on Windows or vi on Unix, and puts the last SQL you entered in a file oddly named afiedt.buf. You don't like it, and you want a name that is easier to type. Is that possible? Of course.

History of Afiedt.Buf

First, a little background on the odd name. SQL*Plus evolved from a tool Oracle provided a long, long time ago, called--rather uncreatively--User Friendly Interface, or UFI. When UFI brought up a file for editing, the file had to be given a name unique enough not to conflict with anything else. Therefore the file was named ufiedt.buf, which roughly stood for UFI editing buffer.

Later, as new features were added, UFI became Advanced UFI (well, "advanced" is relative). To keep pace, the generated file was renamed to the Advanced UFI editing buffer, or aufiedt.buf. However, an operating system that Oracle supported couldn't handle a seven-character file name before the period, so the name was shortened to afiedt.buf, keeping it distinct from ufiedt.buf (which presumably was not as "advanced"). As technology advanced, Oracle never bothered to change it, and the name stuck. Long live afiedt.buf!

Why Change It?

If it ain't broke, don't fix it. Shouldn't that be the case here? The name doesn't break anything; so why change it? There are three reasons to at least consider.
  1. The name is difficult to type.
  2. The extension ".buf" is not automatically recognized by SQL*Plus as a SQL script.
  3. The file is created in the current directory. Suppose you are in a directory you don't have write permission on, e.g. "/etc" in Unix or \Windows in Windows: you can't just bring up the editor, as SQL*Plus won't have the permissions to create the file there. You will get this error:

SP2-0110: Cannot create save file "afiedt.buf"

     What do you do? Exit from SQL*Plus, change to a proper directory and reconnect? Then you would lose all the settings you may have made in that session. So you would want to create the file in a location of your choosing.

So, while you mean no disrespect to Oracle's legacy, you may want to relegate afiedt.buf to where it belongs--the footnotes of history.

How can I Change It?

It's ridiculously simple to change the file name. A SQL*Plus setting controls it.

SQL> set editfile arup.sql

After this when you type the editor command:

SQL> ed
Wrote file arup.sql


Note how the file created was named "arup.sql", not afiedt.buf. Of course, you can use any name you find appropriate.

Changing Directory


Let's handle the directory problem. You are in a directory where you don't have write permission and you want to edit the command. Well, you can use a full path for the edit file as well.

SQL> set editfile c:\temp\arup.sql
SQL> ed
Wrote file c:\temp\arup.sql


The file is created in the C:\temp directory instead of the default location, the current directory.

Changing the Editor

While on the subject, did you know that you can change the default editor as well? For instance, on Windows you don't have to use Notepad as the editor for SQL scripts. It can be Notepad++ or, as I prefer, vim, which is enhanced vi. On Unix the default is not vi; it's "ed", and you may want to change it. How?

Very simple; another setting, _EDITOR, allows us to do that. It's a substitution variable, so you have to use the DEFINE command:

SQL> define _editor = vi

That's it; now when you enter the "ed" command, it will bring up the vi editor. Change it to whatever you prefer.

Tips for Practical Uses

If you want to change the default editor and the default edit file, put the commands in the glogin.sql file, which ensures they are executed automatically whenever someone invokes SQL*Plus.
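
On Unix the site profile is $ORACLE_HOME/sqlplus/admin/glogin.sql; a minimal sketch of the relevant lines might look like this (the file path and the editor here are just examples):

-- in $ORACLE_HOME/sqlplus/admin/glogin.sql
set editfile /tmp/scratch.sql
define _editor = vim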

Afiedt.Buf is dead. Long live Afedit.Buf. Or, was it Afiedt.Buf? Oh, who cares?


Friday, July 07, 2017

Setting APPINFO in SQL*Plus

Ever used the MODULE column of the V$SESSION view? If you haven't, you are missing out on a very important piece of instrumentation built right into Oracle database session management. It allows you to assign an arbitrary name that properly describes your application. Later, when you want to identify that session, you can check the MODULE column in V$SESSION. Even though the userid is the same for all these sessions, the module column will help you identify the one you want.

Let's see how it works with a little example. Here is how you check the module information:

select module
from v$session
where username = 'SYS';

MODULE
-----------------------------
sqlplus.exe
sqlplus.exe

In both cases the MODULE column shows the same value. You can set the module to a more descriptive name by issuing this command:

SQL> exec dbms_application_info.set_module('MyModule','MyAction')

PL/SQL procedure successfully completed.

Now if you check the module:

MODULE
---------------
MyModule
sqlplus.exe

As you can see, the session where we executed the packaged procedure has the descriptive module name. If you want to enable tracing, check the session's stats, debug what's going on, etc., the ability to identify the session uniquely and accurately can be a lifesaver.

But we had to execute the dbms_application_info package to do it. The principle of least privilege says that we shouldn't grant more privileges than absolutely necessary. While no one will argue that this isn't convenient, it will probably be hard to justify as "absolutely" necessary. So what can you do to identify the session via the module? The generic module name "sqlplus.exe" is pretty much useless.

There is a much simpler solution: the SQL*Plus setting appinfo comes to the rescue. By default the setting is off. You can confirm that by issuing the following:

SQL> show appinfo
appinfo is OFF and set to "SQL*Plus"

To set it to a value you want, use the following:

SQL> set appinfo MyApp

Now if you check the sessions from another session:

SQL> select module
  2  from v$session
  3  where username = 'SYS';

MODULE
-------------------------------
MyApp
sqlplus.exe

See how the module is set to MyApp, which allows you to locate the session among the many from the same user. And you did it without calling the package.

What are the practical implications? The best use case, I think, is in the automatic login scripts such as glogin.sql. Put the following line in glogin.sql in the $ORACLE_HOME/sqlplus/admin directory:

set appinfo "AcmeModule"

Now any session using that Oracle Home will have the Module column set to AcmeModule.

However, you can also create local files called login.sql in individual directories and set appinfo appropriately. For instance, suppose you have three major directories from which you generally run SQL scripts. Create a login.sql file in each directory. Directory "C:\App1" has a login.sql file with the following content:

set appinfo "App1"

Create login.sql files in the other directories with the appropriate contents. Now when someone connects to the database from SQL*Plus in any of those directories, the module will be set accordingly, and you will be able to tell which directory the user ran the scripts from.

A caveat on the above, though: in Oracle 12.2, the login.sql file in the current directory is silently ignored. To make sure the local login.sql file is read, you have to explicitly set the ORACLE_PATH or SQLPATH environment variable to that directory.
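
For example, on Linux you could set it in the shell before starting SQL*Plus (the directory shown is just an example; on Windows, set SQLPATH instead):

export ORACLE_PATH=/home/oracle/app1
cd /home/oracle/app1
sqlplus / as sysdba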

I hope you find a use for this little-known SQL*Plus setting.

Tuesday, June 13, 2017

E4 Session Developing Agnostic Data Services

Thank you all for attending my virtual session Developing Location and Technology Agnostic Data Services at Accenture Enkitec's E4 2017 Conference.

Here is the presentation deck for your reference. http://www.proligence.com/pres/e4_2017/e4_17_data.pdf

As always, I will appreciate your feedback--the good, the bad and the ugly.

NYOUG Session How Oracle Buffer Cache Works

Thank you all for coming to my session How Oracle Buffer Cache Works at the New York Oracle User Group Summer General Meeting in New York City.

You can download the session and scripts used for the demo here.

Presentation : http://www.proligence.com/pres/nyoug17/nyoug17_buffercache.pdf
Scripts : http://www.proligence.com/pres/nyoug17/nyoug17_buffercache_scripts.zip

As I mentioned during my talk, you may find these blogposts helpful to understand this more:

How Oracle Locking Works http://arup.blogspot.com/2011/01/how-oracle-locking-works.html
Cache Buffer Chains Demystified http://arup.blogspot.com/2014/11/cache-buffer-chains-demystified.html
100 Things You Didn't Know About Oracle http://arup.blogspot.com/2010/12/100-things-you-probably-didnt-know.html
More on Buffer Clones http://arup.blogspot.com/2011/04/can-i-fit-80mb-database-completely-in.html

Wednesday, May 17, 2017

Presentations at Great Lakes Oracle Conference 2017

For those who came to my sessions at GLOC 2017 in Cleveland, thank you for giving me your time. I hope you found them useful. As always, I love to hear your feedback. You can add a comment here, email me at arup@proligence.com, tweet at @ArupNanda or at www.Facebook.com/ArupKNanda.

1. All About Deadlocks: The Presentation; The Tracefile Used as an Example
2. How Buffer Cache Works: The Presentation; The Scripts
3. Big Data for Oracle DBAs: The Presentation

Again, I appreciate your coming to my sessions, and thanks in advance for your feedback.

Friday, April 28, 2017

Passing Single Quotes in DBMS Assert Package

Today, while describing the usefulness of the DBMS_ASSERT package in preventing SQL and PL/SQL injection attacks, someone asked me how to successfully pass a string containing single quotes to this package.

First, if you don't know what DBMS_ASSERT is or why you should know about it, check out the presentation on this blogpost. In summary, the ENQUOTE_LITERAL() function strips the single quotes from around the string and replaces them with a single enclosing pair, which makes it a clean, injection-safe string. Here is an example with the string Joe Pizza.

SQL> select dbms_assert.enquote_literal('Joe Pizza')  from dual;

DBMS_ASSERT.ENQUOTE_LITERAL('JOEPIZZA')
---------------------------------------
'Joe Pizza'


As you can see, the string is returned enclosed in a pair of single quotes. Now let's see what happens if we pass in additional single quotes. To be syntactically correct, we have to escape each single quote with another single quote.

SQL> select dbms_assert.enquote_literal('''Joe Pizza''')  from dual;

DBMS_ASSERT.ENQUOTE_LITERAL('''JOEPIZZA''')
-------------------------------------------
'Joe Pizza'


What happened? Well, the package stripped off the extraneous single quotes and replaced them with a single enclosing pair.

But what happens when we need to include a single quote as a legitimate character, e.g. Joe's Pizza? This is where the alternative quoting mechanism (the q-string) in SQL comes in. You can write it as follows:

SQL> select dbms_assert.enquote_literal(q'[Joe''s Pizza]')  from dual;

DBMS_ASSERT.ENQUOTE_LITERAL(Q'[JOE''SPIZZA]')
---------------------------------------------
'Joe''s Pizza'


That's it; "Joe's Pizza" is now perfectly passed.

Wait a minute. It's not Joe's Pizza; it's Joe''s Pizza. There are two single quotes, not one. That's not what we intended, is it? So it's wrong, right?

No; it's perfectly fine. You see, when you write "Joe's Pizza" inside single quotes, you must escape the apostrophe, and the escape character is another single quote. Since the package puts single quotes around the string, the quote inside the string must be doubled for the result to be syntactically correct. So it makes sense to have two single quotes, not just one.
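
You can convince yourself by feeding the enquoted value into dynamic SQL: once Oracle parses the literal, the apostrophe comes back single. Here is a minimal sketch (the variable name is mine):

set serveroutput on
declare
  l_val varchar2(100);
begin
  -- the dynamic statement becomes: select 'Joe''s Pizza' from dual
  execute immediate
    'select ' || dbms_assert.enquote_literal(q'[Joe''s Pizza]') || ' from dual'
    into l_val;
  dbms_output.put_line(l_val);   -- prints: Joe's Pizza
end;
/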

Joe's Pizza--if there is something in the real world by that name--should probably pay me for the publicity.


Thursday, April 27, 2017

Preventing SQL and PL/SQL Attacks at New York Meetup

Thank you all for attending my session at the New York Meetup and New York Oracle User Group Spring Meeting in New York City on April 27th. I am truly honored by your presence, and especially by the questions.

You can download the presentation here, and the scripts I used for the demos here. As always, I will appreciate your feedback, either via comments here, on social media, or by email.

Twitter @ArupNanda
Facebook.com/ArupKNanda

Friday, January 13, 2017

Python for PL/SQL Developers Series

How hard is it to learn a new language? It depends on the learner, of course; but everyone will agree that it is easier to get your feet wet if you learn the corresponding structures of a language you are already proficient in. That's the principle behind my new article series on Oracle Technology Network: Python for PL/SQL Developers. If you are at least at an intermediate level in PL/SQL, you will jumpstart your learning of Python by studying the corresponding syntax elements, which is almost always a guaranteed way to pick up the meat of a language fairly quickly. I learned most of my languages that way, and this series is an experiment in doing the same here.

Designed as a five-part series, it starts off with basics such as variables and assignments, then gradually progresses to conditions, loops, functions, modules, etc. Finally, it explains how to use publicly available packages for data manipulation, including the ones used to connect to the Oracle database. The idea is to have you up and running in Python by committing about an hour every day for a week.

Too busy to read? I have developed a video for each part: watch it and go at it. Each part also has a quiz to test the knowledge you gained from it.

Here it is: http://bit.ly/python4plsql

As always, I will be successful only if you (and only you) like it. I will appreciate your honest feedback on this series: how you liked it (or not), what the strengths are, what could have been improved, and so on. Post a comment here, tweet @ArupNanda, post at www.FaceBook.com/ArupKNanda or just email me at arup@proligence.com.

Wednesday, November 23, 2016

A Million Thanks to _You_

Tomorrow is the Thanksgiving holiday in the United States. Millions of Americans will travel this weekend to be with family, get away from the family, visit somewhere fun, or, for some unfortunate few, be stuck at work or stuck in traffic. Amid loads of turkey, stuffing and the oft-present alcohol, it's easy to forget the concept behind this holiday: it's about giving thanks. It's a day to remember, acknowledge and celebrate all those who made us who we are today, and offer our thanks.


I take this moment to show my immense gratitude to all those who have inspired and encouraged me to speak, write and teach. I have always been a mentor and loved to teach and write. I first presented in public exactly 14 years ago in Richmond, VA. My first published article came out later that year in the New York Oracle User Group publication TechJournal. And my first book was published a year later. Since then I have had the opportunity to speak at 422 sessions in 24 countries, deliver 35 training seminars, and write 517 published articles and 6 books. And I could do all that only because of the support and inspiration of the readers and attendees I have had the pleasure of presenting to and writing for.


Here is a map of the places I presented. It's not complete; I put what I could remember. https://drive.google.com/open?id=1EenMw3RKyKc2AxUMuNCj81lTywo&usp=sharing


I am truly honored to have you read my work or listen to me. For this I am thankful to you, dear reader, attendee, or visitor to my blog.


Happy Thanksgiving!

Friday, November 11, 2016

Sangam16 Session: Resolving Latch Contention

Many thanks to those who came to my session at Sangam, the annual conference of All India Oracle Users Group. You can download the presentation materials here:


Presentation here
Scripts here


As always, I would greatly appreciate hearing your thoughts on the session. Please reach out to me via a comment on this post, or on the following:


Email: arup@proligence.com
Twitter: @ArupNanda
Facebook: ArupKNanda
Google+: +ArupNanda

OTN Ace Directors APAC Tour at Gold Coast


Thank you all for coming to my sessions at the Gold Coast conference of the Oracle Technology Network ACE Directors Tour in APAC. Please feel free to download the materials presented.

1. Secure your database in 1 day Presentation and Scripts
2. Prevent Bad SQLs in the Cloud Presentation
3. Becoming a Multitenant DBA Presentation

As always, I would love to hear how you "really" felt about them--the good, the bad and the ugly. You can post a comment on the blog right here, or reach me via:

Email: arup@proligence.com
Twitter: @ArupNanda
Google Plus: +Arup Nanda
Facebook.com/ArupKNanda

Sangam16 Session: Preventing SQL and PL/SQL Injection Attacks

Thank you to all those who came to my session at Sangam 16, the conference of the All India Oracle User Group, where I presented as part of the OTN ACE Directors APAC Tour.


You can download the materials here: the Slides and the Scripts used in the demo.


As always, I would highly appreciate your feedback, be it a comment here, or via social media or email.


Email: arup@proligence.com
Twitter: @ArupNanda
Facebook.com/ArupKNanda
Google+: +ArupNanda

Monday, October 31, 2016

OTN ACE Director APAC Tour in Sydney Sessions

Thank you all for coming to my sessions at the Sydney conference of the Oracle Technology Network ACE Directors Tour in APAC. Please feel free to download the materials presented.

1. Secure your database in 1 day Presentation and Scripts
2. Prevent Bad SQLs in the Cloud Presentation
3. Becoming a Multitenant DBA Presentation

As always, I would love to hear how you "really" felt about them--the good, the bad and the ugly. You can post a comment on the blog right here, or reach me via:

Email: arup@proligence.com
Twitter: @ArupNanda
Google Plus: +Arup Nanda
Facebook.com/ArupKNanda

Wednesday, October 26, 2016

OTN APAC Tour Wellington, New Zealand Sessions

Thank you all for coming to my sessions. I know your time is valuable, and I appreciate your spending it listening to me.

Here are the materials for my sessions:

1. Cross Platform Database Migration Using RMAN Slides
2. The Art and Craft of Tracing Slides, Scripts
3. Big Data for Oracle Professionals Slides

Like always, I would love to hear from you--not just what you liked, but also anything you were not happy with. Email me at arup@proligence.com, tweet me at @ArupNanda, or post a comment at Facebook.com/ArupKNanda.

OTN APAC Tour Auckland, New Zealand Sessions

Thank you all for coming to my sessions. I know your time is valuable, and I appreciate your spending it listening to me.

Here are the materials for my sessions:

1. Cross Platform Database Migration Using RMAN Slides
2. The Art and Craft of Tracing Slides, Scripts
3. Big Data for Oracle Professionals Slides

Like always, I would love to hear from you--not just what you liked, but also anything you were not happy with. Email me at arup@proligence.com, tweet me at @ArupNanda, or post a comment at Facebook.com/ArupKNanda.

Thursday, September 22, 2016

Slides for OOW16 Session Preventing Bad SQLs

Thank you for coming to my session Preventing Bad SQLs at Oracle Open World, despite it being at lunchtime on the last day.

Congratulations to the winners of my book.

Download slides here http://bit.ly/2cGJqR7

Friday, February 05, 2016

No Data Loss without Sync Data Guard

A few months ago I wrote about an exciting product that allows you to achieve no data loss in a disaster even without using the maximum protection Data Guard that relies on super expensive synchronous networks. Recently I had a chance to actually work on that system and replicate the scenarios. Here I explain how that actually works, with commands and actual outputs.

Background

First, you may want to refresh your memory on the concept of the product from the earlier blog post: http://arup.blogspot.com/2015/04/no-data-loss-without-synchronous-network.html The product is the Phoenix system from Axxana. To reiterate, I am not an employee of that company, nor on its payroll; I just want to make others aware of this innovative, out-of-the-box solution to a problem we are all familiar with. I usually don't write about specific products unless I feel strongly about their effectiveness for the user community.

Let's examine the issue we are facing:
  1. In a Data Guard environment, some amount of redo information may be left behind at the primary site.
  2. If the primary site meets with a disaster where nothing is left, this data is lost forever.
  3. If this data loss is not acceptable, then you have to run Data Guard in maximum protection mode, which guarantees that updates occurring on the primary are reflected on the standby database immediately. That way the data is already at the standby site and is not affected by the loss of the primary site.
  4. However, this also means the network has to be super fast and super reliable to sustain the updates at the standby site. Unfortunately, that also means the network is super expensive. It can also become technically challenging across very large distances.
  5. This is why many corporations that can't afford the expensive network infrastructure do not implement maximum protection Data Guard and simply accept the data loss as an unavoidable problem.
Well, until now. Axxana's product addresses the issue in a non-traditional manner. The only thing that stands between full recovery at the standby site after a disaster and an expensive network is the redo information that has not yet been transmitted to the standby server. If that information were available to the standby host, you would be able to recover the database completely. The Phoenix system from Axxana is a solid state drive enclosed in a case protected from common disaster elements such as flood, fire and explosion. You put this special box at the primary site and create one member of the redo log group on its storage, as shown in the figure below. The box has protected network ports that allow you to access its contents even if the normal ports are destroyed in the disaster. It even has a cellular modem that allows it to be reached remotely. In fact, the Axxana software can pull the necessary data from this special box remotely, from the standby host, and apply it to the standby database. And the best thing about it is that this is the plain vanilla Data Guard you are familiar with; no special magic goes on under the hood.
Fig 1. Axxana Blackbox Internals
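
Placing redo members on the blackbox is ordinary Oracle administration, nothing product-specific. Assuming the blackbox storage is presented as an ASM disk group, creating log groups with a member on each storage might look like this (the disk group name +AXX is purely illustrative):

alter database add logfile group 4 ('+DATA', '+AXX') size 50m;
alter database add logfile group 5 ('+DATA', '+AXX') size 50m;
alter database add logfile group 6 ('+DATA', '+AXX') size 50m;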



Let's see how it works with a hands-on approach inside a lab. I have two hosts:



Role      Host       Database
-------   --------   --------
Primary   e02lora    adb
Standby   e02rora1   adbsb


The storage is ASM, not a filesystem. Replication on a filesystem is relatively simple, so I deliberately chose ASM to see if any ASM nuances would come up. The Data Guard configuration is maximum performance, so there is no need for an expensive, fast, low latency network infrastructure. The setup has an Axxana Phoenix system, which consists of three components:
  1. Blackbox - the "special" protected storage system.
  2. Collector - a small server at the primary site that collects the changes to the files under Axxana protection; in this case, the controlfile, redo logs and archived logs. I don't need to place any database files there, because they are not needed for this recovery. Keeping this small set of files also makes it possible to put the protected files of many databases--not just one--inside a single blackbox.
  3. Recoverer - a small server at the standby site that receives the changes from the blackbox and keeps the files up to date when a disaster occurs.
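
By the way, you can confirm from the database itself that the configuration really runs in maximum performance mode:

select protection_mode, protection_level
from v$database;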

Setting up

First I ensured that the managed recovery process was running and the standby database was in MOUNTED mode.
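
For reference, this is roughly what that check looks like on the standby (a sketch; the exact output varies by version):

-- start managed recovery if it is not already running
alter database recover managed standby database disconnect from session;

-- confirm the recovery process is running
select process, status, sequence#
from v$managed_standby
where process like 'MRP%';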

Then I created a test schema in the production database.
e02lora$ sqlplus / as sysdba
SQL> grant create session, create table, create procedure, unlimited tablespace to arup identified by arup;
Then I created some objects and data inside this schema.
SQL> connect arup/arup
SQL> create table T (c1 number, c2 timestamp);
SQL> create index IN_T ON T (C1);
SQL> create or replace procedure INS_T (p_c1 in number, p_commit in number) as
2 begin
3 for i in 1..p_c1 loop
4 insert into T values (i,current_timestamp);
5 if mod(i,p_commit)=0 then
6 commit;
7 end if;
8 end loop;
9 end;
10 /

SQL> create or replace procedure DEL_T (p_commit in number) as
2 begin
3 delete T where mod(C1, p_commit)=5;
4 commit;
5 end;
6 /
SQL> create or replace procedure UPD_T (p_commit number) as
2 begin
3 update T set C1 =-1 where mod(C1, p_commit)=3;
4 commit;
5 end;
6 /

SQL> exec INS_T (400000, 5000);

SQL> commit;
After creating the test data, I switched a few log files and waited a bit to make sure the changes were replicated to the standby.
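
The switches and the verification are plain SQL; on the primary and the standby respectively, something like this (a sketch):

-- on the primary
alter system switch logfile;

-- on the standby: the applied sequence should catch up to the primary
select max(sequence#)
from v$archived_log
where applied = 'YES';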

Simulating a Failure


Now to simulate a disconnected network. I didn't have any special tools with me, so I did the next best thing: I disabled the Ethernet interface.

e02rora$ su -
Password:
e02rora$ ifdown eth0

At this point the standby host will not receive any updates; all the changes made to the primary database remain at the primary site only. I canceled the recovery process first.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

I made a few changes to the primary data.

SQL> connect arup/arup
SQL> exec DEL_T (500);
SQL> exec UPD_T (500);

SQL> conn / as sysdba
SQL> alter system switch logfile;

SQL> conn arup/arup
SQL> select count(1) from T;

COUNT(1)
----------
399200

The table now has only 399,200 records. Since the managed recovery process is stopped, these final changes will not be replicated to the standby. I can confirm that by opening the standby in read-only mode and checking the actual table.


On the standby host:

SQL> conn / as sysdba
SQL> ALTER DATABASE OPEN READ ONLY;

Database altered.

SQL> SELECT COUNT (1) FROM ARUP.T;

 COUNT(1)
----------
400000


The standby has all 400,000 rows, compared to 399,200 rows in the primary. The discrepancy is due to the changes not yet propagated to the standby. At this point I simulated a failure of the primary by killing the pmon process.

e02lora$ ps -aef|grep pmon
UID PID PPID TTY STIME COMMAND
oracle 10184 10564 pty0 16:26:17 pmon
e02lora$ kill -9 10184


The primary database is down. The data in its redo logs is lost as well, since it had not yet been propagated to the standby system.

Performing Recovery

Now that we have simulated a disaster, let's see how to recover from it. Remember, since the Data Guard configuration is maximum performance, the standby can ordinarily be recovered only up to the most recent log entry it received. With the Axxana software, however, there is additional data that can be pulled from the blackbox (the storage unit at the primary site that would not be destroyed). But how do you--the DBA--know which files are available at the standby site, which are still left at the primary site, and, most important, how do you get them to the standby host? It gets even more complicated since the blackbox is just a storage device: you have to mount the volumes, mount the ASM disks, and so on. That may be fun when creating a brand new database; definitely not when you are under the gun to recover the database and bring your business back online.



No worries. This is where the Axxana software comes to the rescue. I ran the provided shell script at the standby site. The script contacts the blackbox at the primary site, pulls the needed data and completes the creation of the necessary files at the standby site. Once the files are at the standby site, all you have to do is perform the typical managed standby database recovery. The best part of all? The script even gives you a step-by-step instruction sheet, with the specific file names, which you can copy and paste when Oracle prompts for them. Here is how I called the script, and the resulting output:

root@e02rora1 AxxRecovery# ./AxxRecovery.sh
Logging to '/home/oracle/AxxRecovery/logs/axxana.debug.log'
Calculating Optimization Parameters [done]
Attempting to connect to Axxana's Recoverer [done]

Perform Failover?

Warning!
This operation terminates the Axxana protection for all DBs.

1) OK
2) Cancel & Exit

At this point the script pauses and asks me for confirmation. I enter "1" and the script continues:


Requesting Axxana's Recoverer to 'FAILOVER' [done]

Recoverer communicates with the Black Box (ENTER_FAILOVER_START) [done]
Recoverer communicates with the Black Box (ENTER_FAILOVER_BBX) [done]
Recoverer communicates with the Black Box (ENTER_FAILOVER_CONNECT) [done]

Recoverer is in FAILOVER mode

Transferring required logs files of the database 'adb'.

The script shows the names of the files along with their respective transfer status and percentage progress.

FILE NAME | Size in MBytes | Stage | Progress
==================================|=================|==============|===================
ADB-group_3.258.902686913 | 50 (0) | Completed | 100%
ADB-thread_1_seq_111.266.902852033| 46 (46) | Completed | 100%
ADB-group_2.257.902686911 | 50 (50) | Completed | 100%
ADB-1454516139530A12435456XT_112.C| 11 (11) | Completed | 100%
ADB-group_1.256.902686901 | 50 (50) | Completed | 100%
==================================|=================|==============|===================
Total: | 208 (159) | 5 of 5 files recovered

All required log files exist at '/home/oracle/AxxRecovery/axx_recovery_files'

Preparing user recovery scripts ...

You can safely perform the Oracle RPO=0 recovery process

Please follow the RPO=0 recovery instructions that are described in the file '/home/oracle/AxxRecovery/axx_required_files/recovery_instructions.txt'


As I mentioned, the script creates a detailed step-by-step instruction sheet for the standby recovery. I am actually glad that it does not perform the recovery automatically; that is one process you want to watch and proceed with caution. You probably have only one chance at it, and rushing through it may lead to missteps. At the same time, you want to have to think as little as possible under those stressful conditions, so the detailed instruction sheet comes in handy. The last line shows the location of the instruction file. Here is how the file looks in my case, after removing some banner items:

|---------------------------------------------------------------------------------------------------------------|
| RECOVERY INSTRUCTIONS (03/02/2016 16:26:19) |
|===============================================================================================================|
===============================================================================================================

Step 1) Output setup
====================

Please log into the standby database (as sys user)
and then run the following commands at the SQL prompt:

SQL>
SET SERVEROUTPUT ON
SET LINES 999
SET PAGES 0

Step 2) Recovery using archive log files
========================================

Please run the following commands at the SQL prompt:
* (You can safely ignore the following ORA Error 'Managed Standby Recovery not active')

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

SQL> RECOVER STANDBY DATABASE UNTIL CANCEL;

-- when requested, feed the following file(s):

+DATA/adbsb/archivelog/2016_02_03/thread_1_seq_110.382.902852027
/home/oracle/AxxRecovery/axx_recovery_files/E02LORA1/BBX/ADB/ARCHIVELOG/2016_02_03/thread_1_seq_111.266.902852033

-- finally enter 'Cancel'

SQL> CANCEL

Step 3) Switching to primary control file
=========================================
Please run the following commands at the SQL prompt:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP NOMOUNT
SQL> ALTER SYSTEM SET CONTROL_FILES='/home/oracle/AxxRecovery/axx_required_files/1454516139530A12435456XT_112.C' SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT

Step 4) Renaming data and redo log file names
=============================================
Please run the following SQL statement

SQL> @/home/oracle/AxxRecovery/axx_required_files/logAndDateFileRename.sql

Step 5) Recovery using primary control file
===========================================
Please run the following command at the SQL prompt:

SQL> RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE
-- when requested, feed the following file(s):

/home/oracle/AxxRecovery/axx_recovery_files/E02LORA1/BBX/ADB/ONLINELOG/group_1.256.902686901
/home/oracle/AxxRecovery/axx_recovery_files/E02LORA1/BBX/ADB/ONLINELOG/group_2.257.902686911
/home/oracle/AxxRecovery/axx_recovery_files/E02LORA1/BBX/ADB/ONLINELOG/group_3.258.902686913

-- You should now see a message saying 'Media Recovery complete'

Step 6) Open the Standby Database
=====================================================
Please run the following commands at the SQL prompt:

SQL> ALTER DATABASE OPEN RESETLOGS;

Well, it's dumbed down enough for those stressful moments associated with a standby database recovery--down to even the SET PAGESIZE commands in SQL*Plus, which are quite easy to forget in such situations. Note that it shows the generation time at the very beginning, in the non-US dd/mm/yyyy date format. Anyway, I followed the instructions step by step. Step 4 requires some attention: it shows how to rename the redo log and data files after switching to the primary controlfile.

For the sake of brevity I won't show the entire output. Here is an excerpt from the tail end of the activity:

...
... output truncated ...
...
Specify log: {=suggested | filename | AUTO | CANCEL}
/home/oracle/AxxRecovery/axx_recovery_files/E02LORA1/BBX/ADB/ONLINELOG/group_2.257.902686911
ORA-00279: change 5172890 generated at 02/03/2016 16:16:56 needed for thread 1
ORA-00289: suggestion : +DATA
ORA-00280: change 5172890 for thread 1 is in sequence #114
ORA-00278: log file '/home/oracle/AxxRecovery/axx_recovery_files/E02LORA1/BBX/ADB/ONLINELOG/group_2.257.902686911' no longer needed for this recovery

Specify log: {=suggested | filename | AUTO | CANCEL}
/home/oracle/AxxRecovery/axx_recovery_files/E02LORA1/BBX/ADB/ONLINELOG/group_3.258.902686913
Log applied.
Media recovery complete.

SQL> ALTER DATABASE OPEN RESETLOGS;

Database altered.

SQL> SELECT COUNT (1) FROM ARUP.T;

COUNT(1)
----------
399200


Voila! The output shows "Media recovery complete". The count is 399,200--the same as the number in the production database. The recovery process got those last changes. I accomplished my objective without synchronous replication.

Summary

Recovery without data loss has always been dependent on a high throughput, low latency, ultra-reliable network. Unfortunately, the cost of that infrastructure often precludes no-data-loss setups in organizations. The Axxana solution is innovative in the sense that it addresses the issue with a completely out-of-the-box approach, slashing costs dramatically--in fact, so dramatically that most corporations will be able to accomplish no data loss in Oracle Data Guard setups. At the heart of the setup is the "blackbox", which is just solid state storage with protection to withstand water, fire and explosion. When a disaster obliterates the primary site, this blackbox likely survives, and through its cellular modem it allows its contents to be pulled to the standby site to reconstruct the important files that had not been replicated yet. Once those files are retrieved, it's quite simple to recover the standby database completely. In this blog you saw an example of how I did exactly that as a user.

I hope this helps in making decisions about no-data-loss setups in Data Guard with respect to cost.

Monday, November 09, 2015

NOUG Session: How Cache Fusion Works

For all those who attended my session at the Northeast Oracle User Group at Babson College in Wellesley, MA: have you ever wondered how Cache Fusion knows where to get the block from? Or how block locks differ from row locks? Or are you confused about the meaning and purpose of the Global Cache Service (GCS), Global Resource Directory (GRD) and Global Enqueue Service (GES)? The session was meant to explain how all of these actually work under the covers, with live demos.


I hope you found the session useful and entertaining. As with all my sessions, I use slides as an aid to the presentation, not to communicate the concepts. I have written an accompanying paper which explains these topics in detail. You can download the paper here, or the presentation here. All the scripts I used can be downloaded here.

As always, your feedback will be greatly appreciated.

Sunday, October 25, 2015

Oracle Open World 2015: The Art and Craft of Tracing

Thank you very much to those who attended my session The Art and Craft of Tracing in the #IOUG User Group track. It was a great pleasure to see a standing-room-only crowd, especially in a slot right after lunch. I hope you found the session useful.


Download the presentation here and the scripts I used here.


As always, I would love to hear from you. Please use the comments space to give your feedback, send me an email, tweet (@ArupNanda), or post on Facebook.
