You don't optimize anything -- code or your database -- without first understanding exactly where the performance gains will come from. And use object-oriented design when you work with a database. I've seen people create all sorts of objects, then put direct SQL calls in the middle of their code. Grrr.
Remember the whole MVC style of programming? THIS IS WHAT IT IS ABOUT! Write your code to be maintainable and well designed. Push all database access into your object models. That way, when you restructure your database, you don't have to recode your entire application.
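Here's a minimal sketch of what I mean, assuming a hypothetical `items` table in SQLite (the table and column names are made up for illustration). The point is that application code talks to the model object, never to SQL directly, so a schema change touches one class instead of the whole program.

```python
import sqlite3

class ItemModel:
    """All SQL for the items table lives here -- nowhere else."""

    def __init__(self, db_path):
        self.conn = sqlite3.connect(db_path)
        self.conn.row_factory = sqlite3.Row

    def find(self, item_id):
        # Callers ask for an item by id; they never see the table layout.
        row = self.conn.execute(
            "SELECT id, name, quantity FROM items WHERE id = ?", (item_id,)
        ).fetchone()
        return dict(row) if row else None

    def adjust_quantity(self, item_id, delta):
        # One place to change if the schema is restructured later.
        self.conn.execute(
            "UPDATE items SET quantity = quantity + ? WHERE id = ?",
            (delta, item_id),
        )
        self.conn.commit()

# Application code stays schema-agnostic:
#   items = ItemModel("inventory.db")
#   widget = items.find(42)
#   items.adjust_quantity(42, -3)
```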
Once you've finished coding, you can start optimizing both the code and the database. A good SQL database has all sorts of tricks -- joins, views, indexes -- to turn what looks like multiple table lookups into a single operation. Deciding not to normalize your database because you assume it will be inefficient is a mistake.
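For instance, here's a sketch with made-up tables: keep the schema normalized, and let a join (wrapped in a view, if you like) make the multi-table lookup feel like a single flat query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Normalized: customers and orders live in separate tables.
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         total REAL);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 75.0), (12, 2, 300.0);

    -- A view makes the two-table lookup read like one table.
    CREATE VIEW customer_totals AS
        SELECT c.name, SUM(o.total) AS total_spent
        FROM customers c JOIN orders o ON o.customer_id = c.id
        GROUP BY c.id;
""")

# One query, one operation, even though the data stays normalized.
for name, total in conn.execute("SELECT name, total_spent FROM customer_totals"):
    print(name, total)
```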
In the mid-70s (yes, I'm THAT old) I worked on a rather popular computer called the Cado. The Cado could do things that otherwise required computers costing 10x as much. It was the first computer that many businesses could actually afford.
The secret was the architecture: Cado assumed that hardware cost much more than software, so everything was optimized around the hardware. That showed in the OS (it had no file allocation table -- we tracked disk allocation by paper and pencil) and in the software, which required developers to use scant resources and write their programs in 256-byte segments. This allowed four users to share a Cado in only 48K of memory.
Heard of Cado? Nope -- they went into a tailspin in the mid-80s because that primary assumption about hardware cost vs. software cost turned out to be wrong. By the 1980s, hardware costs had dropped and software had become the major cost. Over-optimizing the software meant the programs were almost impossible to maintain. It also meant programs were written for specific clients, so you had 20 programs for 1,000 clients. Meanwhile, software written for maintainability, scalability, and configurability could be sold to millions of clients, spreading the programming costs over a much larger sales base.
By the mid-80s, the specialized hardware needed to run Cado programs meant you couldn't take advantage of the PC mass market. A PC cost $3,000 while the Cado hardware cost $15,000. Cado programs cost thousands of dollars while equivalent PC programs could be had for $600. Hundreds of Cado VARs went belly up as their sales disappeared. No one wanted to spend $20K on a system that was slower, buggier, and harder to maintain than an equivalent $7,000 PC-based system.
So, don't do what Cado did and make stupid optimization assumptions. Write good code, and then test where optimization might actually do some good. Otherwise, you'll be writing yourself into a corner.
I recently worked on a project with a distributed database where inventory items were stored in local databases. To know which local database to fetch an item from, the item number included the location where that item was stored. If you knew the item number, you knew which database held the item, and you could fetch it with a single lookup.
A great idea -- until inventory items were moved from one location to another. Moving an item meant changing its item number, which meant rewriting the entire inventory table, all transactions (since the item number was embedded in each transaction), the history tables, the customer tables, etc. It turned out that about two to three dozen items moved every month. So, every month, the entire database, distributed across more than 60 locations, had to be updated. That one database optimization technique cost us several days of downtime each and every month.
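Here's a toy sketch of the trap -- the numbering scheme and tables are invented, not the real system, but the shape is the same. When the location is baked into the key, moving one item forces you to rewrite every row that ever recorded that key; had the location been a plain column keyed by a stable item id, a move would have been a one-row update.

```python
# Hypothetical scheme: item number is "LOC-SEQ", e.g. "NYC-000123".
def make_item_number(location, seq):
    return f"{location}-{seq:06d}"

def location_of(item_number):
    # The "optimization": the key itself tells you which local database to hit.
    return item_number.split("-")[0]

inventory = {"NYC-000123": {"desc": "widget", "qty": 40}}
transactions = [{"item": "NYC-000123", "qty": -3},
                {"item": "NYC-000123", "qty": +10}]

def move_item(old_number, new_location):
    # Because location is part of the key, moving one item means rewriting
    # every table that ever recorded that key...
    seq = old_number.split("-")[1]
    new_number = f"{new_location}-{seq}"
    inventory[new_number] = inventory.pop(old_number)
    for t in transactions:          # ...plus history, customer tables, etc.
        if t["item"] == old_number:
            t["item"] = new_number
    return new_number

# With a stable item id and a separate location column, the move would
# have been a single UPDATE on one row instead of a database-wide rewrite.
```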