
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 1234

BCNF in SQL Server

May I know what the disadvantages of using Boyce-Codd Normal Form are?
3 Solutions
Boyce-Codd Normal Form is good for OLTP applications.
It is less good for data warehouse applications: since the data is normalized, it is split across many tables. The disadvantage is that when you want to perform a SELECT that does lookups (that is, translates codes to their descriptions), you need to join a central table with many lookup tables, and the query will be slower than if you stored all your information in a single table.
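The lookup-join pattern described above can be sketched in a few lines. This uses SQLite from Python purely as a runnable stand-in for SQL Server, and all table and column names (orders, status_lookup, orders_flat) are invented for illustration:

```python
# Normalized design: codes in the central table, descriptions in a lookup
# table, so translating codes requires a join. Denormalized design: the
# description is repeated on every row and no join is needed.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Normalized: central table stores only a code...
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status_code TEXT)")
# ...and a lookup table translates codes to descriptions.
cur.execute("CREATE TABLE status_lookup (code TEXT PRIMARY KEY, description TEXT)")
cur.executemany("INSERT INTO status_lookup VALUES (?, ?)",
                [("N", "New"), ("S", "Shipped")])
cur.executemany("INSERT INTO orders VALUES (?, ?)", [(1, "N"), (2, "S")])

# Translating the code requires a join against the lookup table.
normalized = cur.execute("""
    SELECT o.id, s.description
    FROM orders o JOIN status_lookup s ON o.status_code = s.code
    ORDER BY o.id
""").fetchall()

# Denormalized: the description is repeated per row; no join needed.
cur.execute("CREATE TABLE orders_flat (id INTEGER PRIMARY KEY, status_desc TEXT)")
cur.executemany("INSERT INTO orders_flat VALUES (?, ?)",
                [(1, "New"), (2, "Shipped")])
flat = cur.execute("SELECT id, status_desc FROM orders_flat ORDER BY id").fetchall()

# Both layouts answer the same question; the work done to answer it differs.
print(normalized == flat)
```

Whether the join actually costs anything noticeable depends on indexes and data volume, which is the crux of the debate later in this thread.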
Chris M (Consulting - Technology Services) commented:
I won't repeat the advantages, but since BCNF leads to separation of related data, with integrity kept through links, it's definitely going to lead to slow querying when the SQL engine tries to fetch all the necessary linked data.
Queries always slow down with the number of joins you ask SQL Server to make, therefore data retrieval will definitely be slower.
Secondly, DMLs (inserts, updates and deletes) will also slow down with the number of tables a single DML has to hit.
Since SELECTs are slow, views (built on SELECT statements) will also be slow.
On top of being slow, you will definitely have more tables in the database because of normalisation. This might not be a serious problem for a small database, but it will definitely be felt in databases that have thousands of tables, and it adds maintenance costs.
It will definitely cause query complexity, especially when there are many points of reference.
Because of normalisation, sometimes even the pros end up with unnatural designs.
NB: the fastest databases are less normalised, especially for data that is heavily queried and modified.
I hope this fully answers your query :-)
Chris Musasizi.
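The DML point in the answer above can be made concrete: in a normalized schema, recording one business event can touch several tables in a single transaction, while a flat design touches just one. A minimal sketch, again using SQLite as a stand-in for SQL Server, with an invented customers/orders/order_lines schema:

```python
# One logical "place an order" event spread across three normalized tables.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id));
    CREATE TABLE order_lines (order_id INTEGER REFERENCES orders(id),
                              product TEXT, qty INTEGER);
""")

# A single business event requires three statements across three tables,
# wrapped in one transaction to keep the linked data consistent.
with con:
    cur.execute("INSERT INTO customers (id, name) VALUES (1, 'Acme')")
    cur.execute("INSERT INTO orders (id, customer_id) VALUES (10, 1)")
    cur.execute("INSERT INTO order_lines VALUES (10, 'widget', 5)")

tables_touched = 3  # versus 1 for a single wide table
```

Whether those extra statements are a real cost in practice is exactly what the rest of the thread disputes.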
>> what are the disadvantages of using Boyce-Codd Normal Form?

Disadvantages compared to what? A potential disadvantage of BCNF compared to 3NF is that BCNF is not guaranteed to be dependency-preserving whereas 3NF is. However, in practice BCNF is dependency-preserving in many if not most cases.
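The dependency-preservation point can be shown with the usual textbook example (student/course/teacher — these names are the standard illustration, not from the question). With FDs {student, course} → teacher and teacher → course, the table is in 3NF but not BCNF, because teacher is a determinant but not a key:

```python
# One-table design holding (student, course, teacher) rows.
rows = [
    ("alice", "math", "prof_x"),
    ("bob",   "math", "prof_y"),
]

# BCNF decomposition driven by the FD teacher -> course:
teaches  = {(t, c) for (_, c, t) in rows}   # (teacher, course)
enrolled = {(s, t) for (s, _, t) in rows}   # (student, teacher)

# The decomposition is lossless: joining the two pieces back together
# on teacher reproduces the original rows exactly.
rejoined = {(s, c, t)
            for (s, t) in enrolled
            for (t2, c) in teaches if t == t2}
assert rejoined == set(rows)

# But the FD {student, course} -> teacher now spans both tables:
# neither teaches nor enrolled alone can enforce it. That is what
# "BCNF is not guaranteed to be dependency-preserving" means.
```

A 3NF design would keep the single table (preserving the dependency) at the cost of some redundancy in the teacher → course pairing.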

Normalisation has nothing to do with performance. It is misleading and pointless to try and generalise or speculate about performance without looking at the specifics of the data, indexes and queries.

Chris M (Consulting - Technology Services) commented:
Hi dportas,
Unless you have very little data in a database, the more joins you make, the slower your data retrieval will be. This is quite evident even on smaller databases, so that's straightforward.
For the same reason (in favour of performance), database architects choose not to normalise some data to BCNF when designing very fast enterprise database systems on the same hardware.
Normalisation will affect the way you write your queries and the way you index your data, and this in turn affects retrieval and other database work.
I think generalising is more realistic: it shows that this matters little on small systems but has a serious impact on huge systems.
All of this is covered in performance-tuning chapters and extensively in database architecture courses.
I hope this sheds more light on what I stated earlier.
For more, have a look at http://www.sql-server-performance.com/, www.microsoft.com/sql or www.sqlservercentral.com.
I hope this was helpful, thanks.
By definition a database not in BCNF has more data redundancy than a database in BCNF. This means the total size of the data usually *increases* when you denormalize, the potential workload is greater, more physical reads and writes are likely to be required and so on. Typically, the larger your data the more important it is to eliminate redundancy through good design practices like normalization.

You mentioned performance tuning, which is all about modifying *physical* implementation and *physical* operations, because that is what determines performance.  Logical operations (joins) don't necessarily correspond one-to-one with physical ones (scans, seeks, etc) and some logical operations can be eliminated altogether by the query optimizer.
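The logical-versus-physical distinction above is visible even in a small engine like SQLite (used here only as a runnable stand-in; SQL Server plans differ in detail, and the table names are the invented ones from earlier): a logical join against a lookup table's primary key compiles to an indexed SEARCH of that table, not a second full scan.

```python
# Inspect the physical plan SQLite chooses for a logical join.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE status_lookup (code TEXT PRIMARY KEY, description TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status_code TEXT);
""")

plan = cur.execute("""
    EXPLAIN QUERY PLAN
    SELECT o.id, s.description
    FROM orders o JOIN status_lookup s ON o.status_code = s.code
""").fetchall()

# The last column of each plan row is a human-readable description:
# one SCAN of the driving table, one indexed SEARCH of the lookup table.
details = [row[-1] for row in plan]
assert any("SEARCH" in d for d in details)
```

The exact wording of the plan text varies between SQLite versions, but the point stands: the physical operation behind a join is determined by indexes and the optimizer, not by the join's presence in the SQL text.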

While it is true that logical design changes may be made as part of a strategy to optimise a database, I hope you can see that you are making an awful lot of assumptions when you say that a normalized design will work "slower" than a denormalized one. For some queries you will be right. In many other cases you will be wrong. So the purpose of my original answer was to explain that it all depends what you are comparing it to and what you are trying to achieve.
Chris M (Consulting - Technology Services) commented:
You're right to say it depends on the nature of the data. It's not certain that you will always get poor query performance when you normalise; that's why I chose to generalise.
In other words, it really depends on what the data looks like, the size of the data, etc. Sometimes it's faster to normalise, other times it's not.
Let's not look at only one side of the coin: I agree denormalisation increases the volume of data, but it may not always have a serious performance impact, just as the opposite may not.
The surprising fact is that it's possible for a smaller database to perform worse than a larger one, despite the fact that smaller databases often perform better.
Generally speaking, performance stems from data volumes, but the nature of the data and how it's normalised become serious factors in query performance too, especially as the data grows.
The best way to know where you stand is to test your performance after normalising; you then reach the optimum state by denormalising to the level where your database performs best.
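The "measure, then decide" approach described above can be sketched as a tiny harness: run the equivalent query against a normalized and a denormalized layout and compare timings. SQLite stands in for SQL Server here, the schema reuses the invented names from earlier, and no particular result is claimed — the numbers depend entirely on the data, indexes and hardware:

```python
# Time the same logical question against two physical designs.
import sqlite3
import time

def timed(cur, sql, n=100):
    """Run a query n times and return total elapsed wall-clock seconds."""
    start = time.perf_counter()
    for _ in range(n):
        cur.execute(sql).fetchall()
    return time.perf_counter() - start

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE status_lookup (code TEXT PRIMARY KEY, description TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status_code TEXT);
    CREATE TABLE orders_flat (id INTEGER PRIMARY KEY, status_desc TEXT);
    INSERT INTO status_lookup VALUES ('N', 'New'), ('S', 'Shipped');
""")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [(i, "N" if i % 2 else "S") for i in range(1000)])
cur.executemany("INSERT INTO orders_flat VALUES (?, ?)",
                [(i, "New" if i % 2 else "Shipped") for i in range(1000)])

t_norm = timed(cur, "SELECT o.id, s.description FROM orders o "
                    "JOIN status_lookup s ON o.status_code = s.code")
t_flat = timed(cur, "SELECT id, status_desc FROM orders_flat")
print(f"normalized: {t_norm:.4f}s  denormalized: {t_flat:.4f}s")
```

Either design can win depending on the workload, which is the whole reason to measure rather than assume.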
Once those two steps are done, you are ready for more physical optimisation. The two can go hand in hand, but it's best to start from an optimised design that simply grows.
The bottom line is that the answer is not always a positive one.
Lastly, performance tuning does not stop at the physical implementation of operations. It covers query tuning as well, which is purely logical.
Enough said, thanks for the comments.
Good comments. It was the "definitely slow" comments that I didn't agree with.

I generally design to at least BCNF (and usually 5NF) and make changes only where there is some proven benefit to doing so - or as you put it: "testing your performance after normalising and then getting to the optimum state". For most OLTP databases, BCNF works very well (as momi also pointed out) and is pretty much the norm in my experience. 5NF is good discipline if your schema is subject to a lot of incremental change and in practice many BCNF databases are also in 5NF anyway.
