Most databases allow NULL values in columns covered by a unique constraint. I like this, because sometimes you really do need that. The problem is that they don’t consider NULL equal to NULL when enforcing the constraint. That means you can have multiple rows with the same “values” (I know, NULL is not really a value, but in some uses it effectively is).
I have a table with four columns that need to be maintained as unique, but where some of the columns can be NULL, indicating a total lack of value for that column. Because the system is proprietary, I can’t give the exact table details.
The columns are parent_id, metric_id, object_id, item_id
parent_id and metric_id are NOT NULL; they must have a value. object_id and item_id can be NULL, indicating a “default” value – in other words, if you look something up and can’t find a match on all four columns, you can search for a partial match where object_id and/or item_id are NULL. Objects and items have foreign keys to their parent tables, so they must contain valid values; I can’t just put an empty placeholder value in there.
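That fallback lookup can be sketched like this. This is only a sketch: it uses SQLite (via Python’s sqlite3 module) purely as a stand-in for the real database, and the value column and sample data are invented for illustration – only the four key column names come from the table described above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical table: the four key columns are from the post,
# the "value" column and rows below are made up for the demo.
cur.execute("""
    CREATE TABLE metrics (
        parent_id INTEGER NOT NULL,
        metric_id INTEGER NOT NULL,
        object_id INTEGER,        -- NULL means "default for any object"
        item_id   INTEGER,        -- NULL means "default for any item"
        value     TEXT
    )
""")
cur.executemany("INSERT INTO metrics VALUES (?,?,?,?,?)", [
    (1, 10, 2123, 39392, "exact"),
    (1, 10, 2123, None,  "object default"),
    (1, 10, None, None,  "metric default"),
])

def lookup(parent_id, metric_id, object_id, item_id):
    # Prefer the most specific match: real column values beat the
    # NULL "default" rows, which only win when nothing else matches.
    row = cur.execute("""
        SELECT value
        FROM metrics
        WHERE parent_id = ? AND metric_id = ?
          AND (object_id = ? OR object_id IS NULL)
          AND (item_id   = ? OR item_id   IS NULL)
        ORDER BY (object_id IS NOT NULL) DESC,
                 (item_id   IS NOT NULL) DESC
        LIMIT 1
    """, (parent_id, metric_id, object_id, item_id)).fetchone()
    return row[0] if row else None

print(lookup(1, 10, 2123, 39392))  # exact
print(lookup(1, 10, 2123, 11111))  # object default
print(lookup(1, 10, 9999, 11111))  # metric default
```

The ORDER BY ranks candidate rows by how many of the optional columns actually matched, so the all-NULL “default” row is only returned as a last resort.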
This means I can have:

parent_id  metric_id  object_id  item_id
        1         10       2123    39392
        1         11       NULL     NULL
        1         11       NULL     NULL
and that is not valid, because I now have two “duplicate” rows. I’m still trying to think of a reason you’d ever want to allow that type of duplicate.
From what I’ve read, SQL Server would not allow this – it treats NULLs as equal for uniqueness, so a unique constraint permits at most one such row – so kudos to SQL Server.
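Here is a small demonstration of both the problem and one possible workaround. Again SQLite (through Python’s sqlite3 module) is only a stand-in for the real database, and the -1 sentinel is an assumption that real ids are always positive; in Oracle the equivalent would be a function-based unique index, and the exact syntax differs per database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Table mirroring the four-column layout from the post.
cur.execute("""
    CREATE TABLE metrics (
        parent_id INTEGER NOT NULL,
        metric_id INTEGER NOT NULL,
        object_id INTEGER,
        item_id   INTEGER,
        UNIQUE (parent_id, metric_id, object_id, item_id)
    )
""")

# Two "identical" rows with NULLs both insert cleanly: the plain
# UNIQUE constraint never considers NULL equal to NULL.
cur.execute("INSERT INTO metrics VALUES (1, 11, NULL, NULL)")
cur.execute("INSERT INTO metrics VALUES (1, 11, NULL, NULL)")
print(cur.execute("SELECT COUNT(*) FROM metrics").fetchone()[0])  # 2

# One workaround: a unique index over expressions that map NULL to a
# sentinel that can never collide with a real foreign-key id
# (assumes ids are always positive).
cur.execute("DELETE FROM metrics")
cur.execute("""
    CREATE UNIQUE INDEX metrics_uq
    ON metrics (parent_id, metric_id,
                COALESCE(object_id, -1), COALESCE(item_id, -1))
""")
cur.execute("INSERT INTO metrics VALUES (1, 11, NULL, NULL)")
try:
    cur.execute("INSERT INTO metrics VALUES (1, 11, NULL, NULL)")
except sqlite3.IntegrityError as e:
    print("duplicate rejected:", e)
```

Because the COALESCE only happens inside the index expression, the stored columns still hold real NULLs, so the foreign keys and the NULL-means-default lookup behavior are unaffected.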
Feel free to add your thoughts or ideas.
Do you need to run a query with a date/time or timestamp comparison using interval math, but have the value for the time period in another table? Do you want to do interval math with sub-queries? You might be surprised, because in MySQL it really is as simple as it sounds. Click here to see…
If you have excessive growth of the SYSAUX tablespace or have encountered “ORA-1688: unable to extend table SYS.WRH$_ACTIVE_SESSION_HISTORY” then check this out. Read more here…
Well, last Friday was a good day. I was able to recover a major system process that had been plaguing one of our systems for a few days. It took some time, because part of the issue is a hardware problem. Looking at what this application did brought back memories of other systems with similar issues, and I found myself saying “Theoretically this isn’t supposed to work, but I want to try this because the scenario looks correct.”
Once more the problem was “bad” indexes. In theory a b-tree index should never “go bad”, but in my experience they do. I’ve seen it too many times on tables with huge numbers of inserts and deletes. Partitioning is the best solution, but when that wasn’t done in the original design and the system is down right now, I’ll always try rebuilding indexes. Worst case, you burn some CPU and maybe have to add a little space while it is running. Best case, you save the day.
Now we just have to hope they figure out how to get the hardware working!
So, here are two pages I’ve written and posted previously, both of which apply here.
Breaking two myths about rebuilding indexes in the Oracle Database
How to find the number of inserts, updates, and deletes on an Oracle table
If you are NOT looking for SQL%ROWCOUNT but want to know how many DML statements have been run over a period of time, then just click here!