Sure - it’s hard to write a bug-free application, but you can - by design - make sure that errors don’t cause data corruption or failure. And testing, QA and unit tests will help get you to a point where the two or three little errors that remain at least do no harm.
I am - like most others who commented here - a software developer for embedded systems. Crashes or degrading behaviour are not an option. My life, and yours, might one day depend on code I’ve written.
One trick to get stable software is to handle errors as they occur and propagate them up to the caller. At a higher level of abstraction you will have the chance to handle hard errors such as out of memory. Had bad luck displaying an incoming message? Maybe next time you have better luck, because some background process has finished and freed some memory. If not - give up after x tries, but keep your data intact.
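Something like this, as a minimal sketch in C - the status codes and function names are mine, invented for illustration:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical status codes, just for this sketch. */
typedef enum { OK = 0, ERR_NO_MEMORY, ERR_GAVE_UP } status_t;

/* Low level: report the failure instead of hiding it or crashing. */
static status_t render_message(const char *text, char **out)
{
    char *buf = malloc(256);
    if (buf == NULL)
        return ERR_NO_MEMORY;        /* propagate up; nothing corrupted */
    snprintf(buf, 256, "%s", text);
    *out = buf;
    return OK;
}

/* Higher level: this is where we know enough to pick a retry policy. */
static status_t display_incoming(const char *text, int max_tries)
{
    for (int i = 0; i < max_tries; i++) {
        char *rendered = NULL;
        if (render_message(text, &rendered) == OK) {
            /* ... hand rendered off to the display ... */
            free(rendered);
            return OK;
        }
        /* Out of memory right now; a background task may free some
           before the next attempt, so simply go around again. */
    }
    return ERR_GAVE_UP;              /* give up, but data stays intact */
}
```

The point is that the low-level code never decides what failure means; it just reports honestly, and the layer with enough context decides whether to retry or give up.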
What does the alternative look like? You got a null pointer and didn’t handle it. Some code calling you expected that everything went well and writes some data into nirvana. This can cause funny behaviour, or just as well crash the entire system.
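For contrast, here is the unchecked path - again a made-up sketch:

```c
#include <stdlib.h>
#include <string.h>

void broken_display(const char *text)
{
    char *buf = malloc(256);   /* can return NULL under memory pressure */
    strcpy(buf, text);         /* never checked: on failure this writes
                                  through a null pointer - "into nirvana".
                                  On a small target without an MMU, address
                                  0 may even be writable, silently trashing
                                  whatever happens to live there. */
    /* the caller assumes all went well and carries on */
}
```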
All vital subsystems shouldn’t use dynamic memory or write to files anyway. They are more or less autonomous as long as no one passes garbage to them or corrupts their data via bad pointers. What can go wrong at this level is just data corruption due to writes from the outside (use asserts here during development) and floating-point NaNs creeping up and propagating themselves into the guts of your database.
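One way such a subsystem can look - the names and sizes are hypothetical, just to show the shape:

```c
#include <assert.h>
#include <math.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_RECORDS 32

/* Fixed storage, reserved at compile time: no malloc, no file I/O,
   so the subsystem cannot fail for lack of resources. */
static double records[MAX_RECORDS];

bool store_record(size_t index, double value)
{
    /* Development-time guards against garbage from the outside;
       compiled out of release builds via NDEBUG. */
    assert(index < MAX_RECORDS);
    assert(!isnan(value));

    if (index >= MAX_RECORDS)    /* keep a cheap check in release too */
        return false;
    records[index] = value;
    return true;
}
```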
You have to check for those! errno and low-level exception handling are your friends here.
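Concretely, that means checking results and errno at the boundary before a NaN can sneak in. A small example (whether sqrt sets errno depends on the platform’s math_errhandling, so the isnan check is the one to rely on):

```c
#include <errno.h>
#include <math.h>
#include <stdbool.h>

/* Compute a square root and refuse to let a NaN or domain error escape. */
bool safe_sqrt(double x, double *out)
{
    errno = 0;
    double r = sqrt(x);          /* sqrt(-1.0) yields NaN and, on most
                                    platforms, sets errno to EDOM */
    if (errno != 0 || isnan(r))
        return false;            /* propagate to the caller, as above */
    *out = r;
    return true;
}
```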
It looks like a difficult and time-consuming process to follow these rules, but it is not. If you do it from the start it’s quite easy, and after a month or two it becomes second nature.