Code Like a Rocket Scientist
Do. Or do not. There is no try. -- Jedi Master Yoda
The Seven Secrets of How to Think Like a Rocket Scientist by Jim Longuski is a great book. It's almost a pattern language for innovation and getting things done. (I reviewed it in a little more detail here)
Longuski's intent is clearly to crystallize some useful patterns from the life of a "rocket scientist" and recommend them for general use. He successfully pulls it off and in a way that doesn't come over as elitist or sycophantic.
As a little thought experiment, I was interested to see how one might apply his ideas to the arena of coding or software development.
The "seven secrets" actually encompass 50 lessons in all, so this post soon got a bit out of hand!
Anyway, this is what I came up with ... so enjoy, and let me know your own thoughts!
You'll never achieve what you can't even imagine. That's good general advice for life. When it comes to our software efforts though, how much time do we really spend letting our imaginations run wild before shifting to a convergent thinking mode and starting to drill down on the solution?
Work on the Big Picture
Every journey begins with the first step; a cathedral gets built brick by brick. But without the big picture, we are lost with a pile of bricks.
Arguably the technology world is too obsessed with the big picture. Think frameworks to end all frameworks, and our constant reinvention of the perfect computing paradigm: client-server to network-computer to EAI and SOA.
But perhaps that is a sign that we don't spend enough time deeply thinking out the big picture and thus we are stuck leapfrogging from one 80% solution to the next. Who is the programming equivalent of Einstein with his general theory of relativity and lifelong search for a unified theory of everything? Or are we still waiting for that person to be born?
It goes without saying that you must aim high in order to achieve great things.
In software, I think it's actually rather common to have high goals in terms of functionality. Often our problem is that the goals are found to be too high at the end of the day. Massive scope slash, pulled features and death-marches result. But remember this is the "Dream" stage. If there is a failure, it is not in the dream, it is skipping straight to "Stage 7. Do"!
One area where our dreams tend to lack ambition is quality. Peopleware has a lot to say on this subject!
Encouraging a bit of BS is a good way of knocking the stiffness and formality out of the process. Which is what you need if you are looking for creativity, even if your code is "serious stuff".
There's also good logic here. When you invent the most outrageous porkers, you are probably using the technique of inversion - you are purposely searching for inspiration outside the nominal constraints of your problem. In other words, the bullshit artist naturally lives outside the box. Which is where you would expect to find the breakthrough idea of course, right? Make sense, or is that just a prime example of good BS itself!?
Tell a Story
Sleep on It
Literally. As in: work on a problem. Go to bed. Wake up in the morning and find the solution is within your grasp.
"Sleep on It" is common folklore, but Longuski's own experience lead him to believe it held some truth.
In Brain Rules, John Medina presents more fascinating research that demonstrates the effect. Not only are problems more easily solved, but having slept on a problem, you are more likely to make a creative leap to a better solution.
Interestingly, Medina's work also proved the truth of another bit of folklore: the natural orientation of some people to be early birds, and others to be night owls.
So in a software development setting, we would be wise to take advantage of this knowledge:
- If you are trying to crack a hard problem, pulling an all-nighter is a dumb thing to do. Work it, get some sleep, then come back to it in the morning.
- Know who your night owls and early birds are. If you want the worst productivity possible, make night owls struggle in at 7am and early birds work late into the night.
Who is the JFK, the visionary, for your software project? The one person you all turn to, who helps you believe the impossible might actually be possible, and more importantly nurtures the desire to go for it?
Not all projects merit "man on the moon" scale leadership of course. But personally, I'm not interested in working on a project that doesn't have a clear goal. And all goals require at least a little vision to be tangible.
So whatever the project, it's worth thinking about whether you need a 2 foot, 5 foot or 6 foot JFK. And who will that be?
Is it you? Are others waiting for you to stoke up the courage to play that role? Carpe diem!
DeMarco & Lister coined the brilliant term "Management By Hyperbole" in Peopleware. Which I think hits too close to home in many software development shops. Everyone is going to be the next Google, right?
But sometimes you just need to get real, toss the crackpot ideas, and get on with it.
.. and then get back to work please. You are not Matthew Broderick.
The tooling in IT is getting to the stage where simulation is a realistic possibility in routine development. Take Oracle's Real Application Testing for example.
Run a Thought Experiment
The cheapest prototype of all.
Know Your Limits
I'm an optimist without illusions -- JFK
Postscript: Michael clued into the fact that this quote is wrong; it should be "I'm an idealist without illusions". Either I mis-typed or the book is wrong. The sentiment remains however, or we could now quote Obama with "I'm An Optimist, Not A Sap" ;-)
Ask Dumb Questions
Ask Big Questions
Ask "What If?"
Ask "Animal, Vegetable or Mineral?"
Ask Just One More Question
Prove Yourself Wrong
This immediately brings testing to mind. Make sure you put on the black hat (in both the hacker and deBono sense) when unit testing. Don't just write tests that prove things work, write tests for the failure modes and boundary conditions too.
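As a minimal sketch of what "black hat" unit testing might look like, here's a hypothetical Python example (the `parse_port` function and its tests are my invention, not from the book): alongside the happy-path test, we deliberately probe the boundary values and the failure modes.

```python
import unittest

def parse_port(value):
    """Parse a TCP port number from a string; raise ValueError if invalid."""
    port = int(value)  # raises ValueError for non-numeric input
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

class TestParsePort(unittest.TestCase):
    def test_happy_path(self):
        # The "white hat" test: prove the normal case works
        self.assertEqual(parse_port("8080"), 8080)

    def test_boundaries(self):
        # Black hat: poke at the exact edges of the valid range
        self.assertEqual(parse_port("1"), 1)
        self.assertEqual(parse_port("65535"), 65535)

    def test_out_of_range(self):
        # Black hat: values just outside the boundaries must fail
        for bad in ("0", "65536", "-1"):
            with self.assertRaises(ValueError):
                parse_port(bad)

    def test_garbage_input(self):
        # Black hat: what a hostile or confused caller might send
        with self.assertRaises(ValueError):
            parse_port("not-a-port")

if __name__ == "__main__":
    unittest.main()
```

Notice the ratio: one test proves it works, three try to break it. That's the black hat doing its job.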
But we know that already, and usually try for some level of rigorous testing.
Where we probably need to take this advice more to heart is at the front-end of the process: requirements, architecture and design.
Have you ever seen a new architecture, specification or design get approved with only cursory review? And perhaps reviewed by people who were not really motivated or perhaps even qualified to do a thorough job?
I have. And you know who will suffer for it: the brave team charged with implementing and supporting the system.
As we play the role of analyst, architect or designer we should make time to do a thorough critique. Try and break what you have conceived. Admittedly, that can be a hard thing to do. So invite a colleague to help you to try and break the design - an invitation not many would refuse! Do it, even if it's not in the project plan.
Inspect for Defects
The IT industry has adopted the quality mantra from manufacturing, but unfortunately our adoption is still highly selective.
The typical software testing method is akin to classical "quality by inspection".
Poke it, prod it, and raise a ticket if it breaks.
That's about as advanced as weaving textiles with steam-driven looms.
We are seeing more practices that emphasise "quality by design" however, and that's a good thing. For example, the test-first and test driven development movements try to help us to assure quality right at the first gate.
A big area for improvement seems to be a general focus on addressing systemic factors in quality. Fix a bug? What about getting to the root cause of why that bug existed in the first place? Simple techniques like 5 whys can help get us out of a rut.
Have a Backup Plan
I learned this lesson well while doing IT support. Desperate users, desperate situations. You soon discover the importance of always knowing how to retrace your steps, and always having a contingency when you are about to do something that could smoke the users' data (or the hardware).
When the walls are burning, there's nothing like the confidence that comes from having backup plans well-ingrained as your standard practice. It can mean the difference between cool, calm success and landing a bloated fail whale.
It is the same idea I adopt when using source code control. I'm no fan of the school of thought that says only commit at the end of the job. Commits are my breadcrumbs that mark every step, and prevent bad changes from invalidating a whole coding session.
Do a Sanity Test
A good technique to apply to the next social-network-crowdsourcing-prosumer-2.0 business plan you see.
But equally valuable for more technical work. Prime candidates for a shrewd eye are server sizing, that convoluted custom framework design, and integration architecture.
Check your Arithmetic
Know the Risks
Most projects have some form of risk analysis process. Unfortunately, in my experience, most are cursory point-in-time assessments, with a half-hearted consideration of mitigations. You might do just as well getting a tarot reading.
Risks are like enemies ... hold them close, and know them well.
Thinking back over the recent projects I've managed, I realise that they all have some aspect of the plan that has been specifically arranged because of the understanding of the risks we faced. My three favourites:
- Scheduling specific proof-of-concept activities at the very front of the project to eliminate risks or put mitigations to the test.
- Including a performance test, whether the client required it or not. (every well-run performance test I have been involved with has identified at least one issue worth fixing. An issue that would otherwise have gone undetected prior to launch)
- Re-sequencing project activities to address high-risk activities as soon as possible.
Question Your Assumptions
Keep It Simple, Stupid
Draw a Picture
Make a Mock-up
Rapid prototypes are good. Probably the single biggest reason why Visual Basic 3 rocked the world. Suddenly, anyone could mock-up a working GUI app in no time at all.
Now we're seeing some fantastic examples of this same capability for the new internetworked world. WaveMaker may be the new PowerBuilder for the web. And heroku is making Ruby on Rails a hosted no-brainer.
It's an interesting coincidence that the original Visual Basic was derived from a programmable form system, code-named Ruby.
Name the Beasts
Look at the Little Picture
Do the Math
Apply Ockham's Razor
Minimize the Cost
Minimize the Time
Be Mr Spock
Make it Faster, Better, Cheaper (but not all three)
Isn't this the paradox that almost defines the software industry, and the bane of the fixed-price contract?
In fact, classical project management long held that projects are defined by the triple constraints of scope, cost, and schedule. What was unsaid of course is that a fourth variable - quality - was assumed to be constant.
Which is of course a great joke.
But the even bigger joke is that most organizations still tend to behave as if all four variables can be dictated at will.
Kent Beck delivered the seminal piece (in the software context) addressing this issue with his rather radical proposal for Optional Scope Contracts. This is eminently sensible and usually the best reflection of the real objectives on the ground ("we only have this much money to spend on the project; must deliver something within 3 months; and can't compromise on this level of quality. So what we can adjust along the way is: scope!")
This is a key topic covered in the excellent book, Extreme Programming Explained.
Know When Bigger is Better
We have Moore's Law to blame for making this a perennial issue in IT.
Our systems comprise hardware and software. In crude terms, we seek the optimal balance between throwing hardware at the problem and choosing the most productive development tools.
The most exciting trend we are seeing now of course is the mainstream adoption of dynamic scripting languages (python, ruby and php) in problem domains that were once the exclusive province of "proper" (compiled) languages.
Let Form Follow Function
I'll resist making a joke about designers with their iMacs, Photoshop and Flash.
In truth, the leading ideas in graphic/web/interaction design are spot on. Check out Robert Hoekman's Designing the Moment for example (highly recommended).
But let's talk about coding for a sec. Have you ever spent time polishing code, refactoring, or building a really cool bell or whistle ... before you've got even the most basic features working properly?
Test-driven development, and an agile planning method like Scrum, are good antidotes for this!
Pick the Best People
Fred Brooks highlighted the huge variation in individual programmer productivity in the classic The Mythical Man-Month.
For one of the best treatments of how to work with this reality, Peopleware is the canonical read:
The final outcome of any effort is more a function of who does the work than of how the work is done.
- get the right people
- make them happy so they don't want to leave
- turn them loose
Make Small Improvements
Learn by Doing
Real programmers know that what the docs say can be a universe away from how things actually work.
The only way you get to find this out (and hopefully make it all work) is to roll up your sleeves and do stuff.
Similarly, I believe that the only good architect is a practicing architect. If you are not taking every opportunity to work on your craft and keep current, you are ossifying.
The non-practicing architect is typically engaging in what I call "retrospective architecture": looking back at what worked in the past, codifying it as an "architecture" and promulgating it as a "standard". That's putting innovation in the deep freeze, and needs to be shunned!
Sharpen Your Axe
The best, smartest and most creative developers I know are really lazy. In a good way: work smart, not hard.
So while your "good" developer may spend 3 hours cutting and pasting SQL statements around to generate a test date set, the "lazy" developer has whipped up some sql/perl/awk script of Excel sheet that does the same in 5 minutes.
They love their tools, keep them good and sharp, and are always on the lookout for a new one. Sharp tools make quick work of repetitive grunt work.
Correct It on the Way
It is salutary to learn that for the first US planetary mission to Venus, the rocket scientists actually argued about whether course corrections would be needed on the way. Thankfully, to avoid missing Venus by a million miles, trajectory correction maneuvers (TCM) became standard practice. It is of course how "smart" bombs work too.
And the same applies to software development. No matter how perfectly you have things planned out, there will always be some unpredictable external influences along the way. If you are not able to respond, then the outcome is no longer under your control.
This is a core belief of the agile movement, and enshrined in the agile manifesto.
JFDI aka NIKE!
Doing software is a natural procrastinator's heaven on earth (or hell, depending on how firm those deadlines are).
As a well-practiced procrastinator and perfectionist myself, I've learned that "Do Something" is the best advice ever.
The strange thing is ... it doesn't matter so much what you do (even to the extent of temporarily switching to another project or task). [this technique is discussed in another book I've read, but I forget the reference for now]
The important thing is you build some momentum and get moving. The more you do, the more opportunity you will have to explore alternatives and backtrack if necessary.
Don't Ignore Trends
Work on Your Average Performance
Software development is well known for its reliance on crash schedules and bucketloads of overtime to get things done.
For short periods of time, it may be a necessary and effective course of action to get you out of a desperate hole.
But over the course of a year, how does the productivity of such a shop compare with one that is operating at a more steady, controlled pace?
More than likely, the shop that just focuses on bursts of peak performance has a terrible long-term average performance. Not only is average performance out of control because no-one is looking, but it's taking a pummeling from the burnout, turnover and very deep troughs in performance between peaks.
In software development, we should be much more concerned with improving our average performance than maximising our peak. "Situation nominal" as Houston would say.
Look Behind You
In software we conventionally think of using project postmortems and knowledge sharing initiatives to capitalise on our history. Everyone pays lip service to this idea, but in my experience few companies effectively practice it.
This is especially true in organisations driven by the quarterly numbers. It is easy to understand why: there's an inherent disincentive built into the system. The project is delivered and the value is captured, so a postmortem is just a cost that may not have a return for who knows how long (and you are not even sure who the beneficiary may be).
But even if the organisation doesn't see the value, as professionals you would expect we would take the initiative, right? Unfortunately not, because these project-level retrospectives tend not to make a great deal of intuitive sense. Have you ever delivered the same project twice? Isn't technology changing so rapidly that there's more value in learning the new, than tilling the soil of the old?
All these factors conspire to work against our ability to learn from the past at the organisational and individual level. Which is really bizarre when you think about it, given that being successful in the software industry almost demands a focus on continuous learning.
Personally, I think we have two powerful weapons at hand to battle this tendency:
- The patterns movement, which allows us to understand generic concepts and package them for reuse. We may never do the same project twice, but we will use the same approaches many times.
- The agile movement, which emphasises incremental effort, and an ever-increasing range of collaborative tools (all that "Web 2.0" stuff). Do your learning and sharing all the time. If you leave it to the end of the project, you already know when it will get done ... never!
Learn From Your Mistakes
Never! Or get the book and you might have a chance.
Oracle Community on Stackoverflow?
Stackoverflow quietly moved into public beta last week, and I'm stunned by how active it is already.
I'm looking at pages of really funky technical questions here that I haven't a clue how to answer ... and they all have at least one answer in response already.
There are even 106 questions in the "oracle" category.
If you haven't checked it out yet, Stack Overflow is simply a "programming Q & A site". As they say in the FAQ:
What's so special about this? Well, nothing, really. It’s a programming Q&A website. The only unusual thing we do is synthesize aspects of Wikis, Blogs, Forums, and Digg/Reddit in a way that is somewhat original. Or at least we think so.
I have big hopes for this site. The best developer communities I ever participated in were on the old network news/nntp, until it started getting overtaken by the web in the late nineties. Ever since then I've never really found an "optimal" community. It's either everyone (aka google), very specialist mailing lists, or web forums that tend to be too fragmented or low volume to be really useful.
I think this site has great promise to be a well-known meeting place for the world-wide developer community to collect and share knowledge. And I hope we see a huge "Oracle Community" presence (Open Metalink+Forums+Wiki 2.0).
There are two things that have really interested me about this site:
Firstly, it was started by Joel Spolsky and Jeff Atwood (Coding Horror). Nuff said.
Second has been the community engagement during the development process. They've had a podcast which I've been listening to for the past 21 weeks. It's a great fly-on-the-wall kind of experience, having the chance to listen to the developers discuss the site while they are still building it. I hope we hear more development done this way.
Do you have a question or maybe some answers? I really recommend everyone should take the plunge and test it out.
Any site that spawned a parody site even before it was launched can't be all bad!
Think Like a Rocket Scientist
I've been lax in my little posts about books I've read. One of the reasons is that I'm now addicted to bookjetty, which makes it sooo easy to track my reading and think "I'll review/blog it later". The other reason is simply time.
But reading Jim Longuski's The Seven Secrets of How to Think Like a Rocket Scientist has prompted me into action again.
This is a great book on practical innovation, and generally just getting things done. Although it takes the "Rocket Scientist" as the model (understandable, since Longuski is one), it largely avoids the trap of being elitist and sycophantic. It's just an honest and thoughtful analysis of how rocket scientists work, and presented almost like a pattern language for knowledge workers.
The "seven secrets" are actually seven stages of the creative process, from the initial idea generation through to delivery. Each stage includes half a dozen or more "secrets" (or patterns), so the book is more like "The 50 Secrets of How to .."
Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius - and a lot of courage - to move in the opposite direction. -- Albert Einstein
When you find a good move, look for a better one. -- Dr Emanuel Lasker
Do. Or do not. There is no try. -- Jedi Master Yoda
It is often said you can lie with statistics. But it's even easier to lie without them -- Jim Longuski
PS: I since wrote a reflection on this book called Code like a Rocket Scientist
Reflections on a learning model
The conscious competence learning model has uncertain origins, but is probably the best known model for learning. Maybe that's because it is so simple and intuitive - I suspect making it exactly the right kind of 'model' to be picked up by the business book and management consulting fraternity.
It seems to me best applied to the development of "skills" (like riding a bike or programming in python), and less so to changing behaviour or habits (like giving up smoking).
But for skills it works really well, and the simple 2x2 matrix of conscious-competence yields lots of interesting observations to ponder.
That's my version of the matrix. I re-label the "conscious" axis as either "self-conscious" (as in you are painfully aware that you can't do something), or "automatic" (where you have reached the stage where performance is reflexive).
Where you start, where you get to, and the path you take are really dependent on the situation and the individual. In the picture above, I've indicated a starting point of where you are self-conscious about the fact you can't do something; although the literature talks about the strict theoretical starting point of being totally unaware you can't do something (automatic - cannot do in the diagram).
So anything interesting to note?
- Progressing from knowing you can't do something to thinking you can (the reddish line above) is, I think, a perfect definition of what we call "blur like sotong" in Singapore
- How straight a line your progression towards automatic-can do takes is probably a good guide to "natural ability"
- Learning (or training, education and guided practice) tends to shift you up the scale of competence only - since it is more about giving you the knowledge and techniques to do the job 'right'
- Experience (or practice with reflection) tends to move you up the conscious scale towards the point where it is automatic.
So is this model of any practical use? As a point of reflection on your own, or your colleagues', situation, I think it can be a good but crude diagnostic. It reminds you that plain training needs to be coupled with real experience to get you all the way up the curve.