The only thing you're missing is the idea that the database somehow knows what the data means, and will be able to determine everything the data will mean in the future.
For example: if users have so far only inserted unique data into a column, does that mean you can safely optimize the column with a unique index?
Suppose that column holds employee names and the table holds 'blog' entries, with each person making exactly one 'blog' entry a month. For the first month the column contains only unique values. Say your auto-indexer sees this as an opportunity to put a unique index on that column. When the next month rolls around, your users can't insert a new 'blog' entry, because doing so would violate the uniqueness of the column.
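The scenario above can be reproduced in a few lines. This is a minimal sketch using Python's built-in sqlite3 module; the table and column names (blog_entries, author) are hypothetical, invented for illustration:

```python
import sqlite3

# Month 1: each employee posts exactly once, so the author column
# *looks* unique to anything inspecting the data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE blog_entries (author TEXT, posted TEXT, body TEXT)")
con.executemany(
    "INSERT INTO blog_entries VALUES (?, ?, ?)",
    [("alice", "2024-01", "first post"), ("bob", "2024-01", "hello")],
)

# The auto-indexer's mistake: inferring a semantic constraint
# from what is merely a coincidence in the current data.
con.execute("CREATE UNIQUE INDEX idx_author ON blog_entries (author)")

# Month 2: the same author posts again, and the insert is rejected.
failure = None
try:
    con.execute(
        "INSERT INTO blog_entries VALUES (?, ?, ?)",
        ("alice", "2024-02", "second post"),
    )
except sqlite3.IntegrityError as e:
    failure = str(e)
    print("rejected:", failure)
```

The data never told the indexer that authors are unique per table; it only failed to contradict that guess for one month.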
Seemingly-unique data that isn't really unique is not the only problem with having an automatic system impose indexing. Every form of indexing presupposes a semantic constraint on the data that does not exist in the database's internal representation. It also presupposes a lack of concern about the performance cost of maintaining the index.
What if your program indexes a "bit" field?
Putting an INDEX on a database table is an explicit time-space trade-off for the database engine. Indexes aren't magical; they live on disk. To hit an index is to hit disk in order to save CPU on searching. To say "index everything" is to say "I don't care about I/O"; to say "index nothing" is to say "I don't care about CPU load"; and to say "index this" is to say "in this instance it is better to use I/O instead of CPU." If you can write a program that reliably predicts when that trade-off is right, that is something very valuable indeed... and it is one step closer to making programmers and DBAs obsolete.
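You can watch the engine make that trade-off by asking it for its query plan. A minimal sketch, again with sqlite3 and a made-up orders/customer schema; the exact plan wording varies by SQLite version, but the scan-vs-index distinction is what matters:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
con.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(i, "cust%d" % (i % 100)) for i in range(1000)],
)

query = "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'cust7'"

# Without an index: every row is scanned, spending CPU on the search.
plan_before = con.execute(query).fetchone()[-1]
print(plan_before)  # e.g. "SCAN orders"

# With an index: the engine walks a disk-resident structure instead,
# trading I/O (and index-maintenance cost on every write) for CPU.
con.execute("CREATE INDEX idx_customer ON orders (customer)")
plan_after = con.execute(query).fetchone()[-1]
print(plan_after)  # e.g. "SEARCH orders USING INDEX idx_customer (customer=?)"
```

Note that an index on a "bit"-style field would buy almost nothing here: with only two possible values, the index narrows the search barely at all, yet it still costs space and write overhead.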
If you buy the idea that you can build a heuristic auto-indexer, then you aren't far from believing that computers will (eventually, over eons of time) program themselves, and that all programmers and DBAs will be utterly obsolete and out of jobs. And robots will rule the earth.