Heartbleed, Richardson-Lucy and Anti-Fragility

I am no expert in internet security.  I am also fairly new to open source, so I don’t claim to understand all the complicated factors that led to the Heartbleed bug.  From what I understand, the problem occurred because of a bug in a piece of widely used open source code.  There has been quite a bit of discussion of Heartbleed and the issue of open versus closed source.  But to me another issue is centralized versus decentralized, or fragile versus anti-fragile.  There is a good article by Christina Warren on the “open versus closed” issues here.  She mentions that large, popular open source projects are funded in three ways:

  1. Donations from individuals, volunteers (contributing time or coding skills), and non-profits.
  2. The project is funded and steered by a commercial entity or entities.
  3. Corporations who use and benefit from the project hire employees who are dedicated to working on the project full-time.

Note her term “the project”.  Perhaps there should also be some emphasis on “their project” as well as “the project”.  Corporations could borrow from “the project” and give back to “the project”, but at the same time be responsible for “their project”.  Better yet, corporations could borrow from multiple projects and give back to multiple projects to efficiently build “their project” (in this case a customized, decentralized and decoupled security solution).  In turn, “their project” becomes another source that can be borrowed from and contributed to.  The system becomes decentralized.

Like I said, I don’t know much about internet security.  I know more about signal processing.  Lately I’ve been tinkering with Richardson-Lucy, an older and fairly standard deconvolution algorithm.  I’ve tested several implementations of this algorithm and wrote one of my own.  Why did I reinvent the wheel?  There are many reasons, including the need to be compatible with another system I use and the eventual need to target different computing architectures (such as GPUs).  It then becomes important to test different implementations against each other to make sure they behave the same numerically.  My own implementation is not a complete re-invention.  I studied, analyzed, built, and ran the other implementations, so mine has inherited its DNA from the others.
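To make the algorithm concrete, here is a minimal NumPy/SciPy sketch of the basic Richardson-Lucy iteration, assuming a known, normalized point spread function (PSF) and a non-negative blurred image.  The function and parameter names are my own illustration rather than any particular library’s API, and a real implementation would also worry about boundary handling, clipping, and stopping criteria.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, num_iter=30, eps=1e-12):
    """Iteratively estimate the underlying image from `observed`,
    given the point spread function `psf` (Richardson-Lucy update)."""
    estimate = np.full(observed.shape, 0.5)   # flat, non-negative initial guess
    psf_mirror = np.flip(psf)                 # flipped PSF for the correction step
    for _ in range(num_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)    # eps guards against divide-by-zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Checking one implementation against another then mostly reduces to running both on the same test images and comparing the outputs numerically, for example with np.allclose(mine, theirs, rtol=1e-5); small differences in convolution strategy or edge handling tend to show up quickly this way.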

I wonder how so many systems wound up using the exact same “heartbeat” code.  Instead of getting a piece of code from one centralized code base, shouldn’t people get multiple implementations from decentralized sources?  Currently it seems people think in terms of “re-using” something like OpenSSL.  Beyond just re-using code, we should also focus on borrowing from, improving on, modifying, mutating, comparing, and reimplementing code.  Reuse is efficient, but we want systems that are anti-fragile.  Philosopher and statistician Nassim Nicholas Taleb says:

Modern societies: efficiency demands are pushing the structures to the maximum, so a little sand in the cogs make the whole edifice totter.

We demand efficiency, so we all use the same code, but perhaps there is too much emphasis on efficiency.  Robust systems (such as those found in nature) have redundancy and variability.  These properties are desirable in software ecosystems as well.
