You can't be an effective engineer unless you understand how your engineering role impacts the business as a whole. Step back, get a new zeitgeist, and expand your horizons a bit.
Published in Embedded Systems Programming, May 1994
By Jack Ganssle
Recently I had to dismiss an engineer, not so much for technical incompetence as for an inability to make his goals match those of the company. Technical people sometimes drive me crazy; I want to grab them and try to shake some common sense into what all too often seems like thick skulls. Too many are blinded by their egos to the realities of business life that even clerical staff are finely attuned to.
In my mind, the fundamental rule of working in any sort of team is to strive to achieve the goals of the organization. It doesn't matter if the group is a business or the YMCA, the chamber of commerce or a book club. Everyone involved must pull together to succeed at that which is important. I have no experience with large groups like Fortune 500 companies or the government. Perhaps in these cases keeping a low profile is more important than working hard. Smaller companies, though, are often profoundly influenced by the actions of each participating individual.
Never forget that every decision you make in the course of your career must be made in the context of the business's goals. Perhaps over the short term an objective might be to get a widget shipped by some date. Fixing a critical bug might be your only concern for a week or a month. Certainly all companies share one overriding objective: to make a profit. As an engineer in a high tech firm, profitability is something that should influence your day-to-day technical decision making.
Too many people think a synonym for "profit" is "greed". This is simply not the case. Without profits no business survives. Without profits jobs are lost. Without profits no one gets raises.
Larry Constantine and I have had several lively discussions of a secondary goal that we both feel is just as important. Have fun! Life is short, so if you can't have fun at what you are doing, change something - tools, project, company, or maybe even career - to enjoy that which you'll spend the bulk of your life at.
Whoa, you say - I'm just a small fish in the feeding chain with no effect on profitability. I contend that this is just not true. Every decision you make has a small or large effect. Adding an extra hundred lines of diagnostic code to save time in test will reduce profits by the amount of time it takes to write and debug the additional software, but will presumably reap much greater rewards when the technicians use your feature to speed the product through production test. Perhaps removing that same code in high volume applications will save an extra EPROM and reduce manufacturing costs.
Toolchains are a favorite topic of mine. All tool vendors claim that their products will reduce development time and therefore increase profits. The only reason to spend one nickel on tools is to be more productive - a tool that does not fulfill this promise is a waste of time and money.
I find far too many engineers stay locked into a set of tools because of laziness ("uh, I don't know C so I use assembly only") or by their often-incorrect perception that management will not spend money on engineering software and workstations. Presumably you, gentle reader, are, by the very fact you are reading this magazine, not one of these.
One company I'm familiar with is owned entirely by its hundred or so employees. It's interesting to debate the merits of upgrading their tools with a group of engineers from this firm. Never before have I seen such a keen analysis of the effect of buying capital equipment versus additional revenues from getting to market quicker. Of course, they had little interest in reducing labor costs...
You need good tools to be efficient (i.e., to engineer the product most profitably), but do you really need cutting edge, state of the art gilliwigs? How is the most intellectually honest engineer to decide between features and cost? Can any vendor prove, or even draw a weak correlation, between using their latest fancy tool and getting the product out the door faster and cheaper?
In most cases I doubt that any analysis, no matter how careful, will give numbers that mean a lot. Look at the CBO's deficit studies versus those of the Republicans, versus those of the Democrats. Some problems are simply intractable, whether we're discussing the federal budget or software development tools. However, even though a mathematically provably correct decision is not possible, you still must make a decision.
Sure, you can debug code with nothing more than an EPROM programmer and a lot of hope. Perhaps, if the code is only 50-100 lines, this is even the best approach. At some point you'll want a disassembling logic analyzer, at the very least, to find problem areas.
I come to the conclusion that since no one really knows the tradeoff between spending capital dollars and generating profits, it makes sense to use industry-standard conventions in toolchains - that is, to do what most people do. Sure, occasionally it makes a lot of sense to break free and do your own thing. Many outrageous new ideas come along that way. However, most people do best by following in the tracks of their peers.
Since the industry seems to take a leap to new conventions occasionally (like from assembly to C), and then plateau for years or decades using that technology, I recommend that when in doubt exploit that industry plateau.
Can you do embedded cross development on an 8088-based DOS machine? Sure, but a 386 is quicker and will make you more productive - it will generate more profits. The 386 might give a Norton SI 50 times that of the 8088, but I doubt you'll be fifty times as productive. Scaling up to a 486 or Pentium class machine should bring even more efficiency and higher profits - right?
Decisions are rarely so easy. Other factors are important. Buy a 386 today, and you may not have the horsepower to run tomorrow's applications. Though we may not be able to measure a qualitative productivity increase by buying a 486 over a 386, we do know, based on the history of the computer world (an industry-convention, so to speak), that today's computers will not be adequate for tomorrow's applications. The 486 or Pentium costs little more than a 386, but will be usable long after the 386 is consigned to the scrap heap.
Actually, it's interesting to see the minimum requirements for serious development work today. Many companies all but require you to have a CD-ROM drive to get access to the documentation, or to even install the software. Though most anything runs in 8 MB of RAM now, it's pretty clear that 16 MB or more will be needed in the near future for the more advanced OSs that are coming. Remember when an 80 MB disk drive seemed huge?
I also firmly believe that you should pick your development language purely on business issues. Use Ada if the customer mandates it, not for any technical reason, but because it's part of the contract. It's always good business to keep the customer happy! Use PL/M if, and only if, you must maintain a huge investment in old code.
I'm thrilled and amazed to see how in a matter of only a few years C has overtaken assembly as the language of choice for embedded systems, yet even now too many programmers continue to work in assembly simply because they know and love it. Love has no place in business. Damn shame, that. There are certainly many applications that demand assembly for raw horsepower reasons, but these are fewer each day. Even the lowly 8051 has fantastic C compilers that can rival a competent assembly programmer.
Again, no honest person can say that using C will increase productivity (and thus profits) by a factor of 2, 4 or 5.3. We do know, though, that industry convention shows in most cases C programs are completed significantly faster and more accurately than those in assembly. This may not have been true even just a few years ago when compilers were not up to par, but it certainly is today.
The only reason to avoid C for an embedded application is because of size and speed issues. Using assembly can cure these, but you'll pay a heavy penalty in extra programming time - again, lost profits. Just assuming that C will be too clunky is a bad technical and business decision. Run experiments. See how big the runtime package really is. Write code and try to get an idea of the incremental size of the compiled object per line of C. Today even interrupt service routines are regularly coded in high level languages - write some code and time it!
If you are caught in an intellectual backwater you are costing your company money. It's important to stay up to snuff on all of the latest technology, since today's whiz bang idea will be the base level of competition tomorrow. Any engineer who stops aggressively learning and experimenting condemns himself to technical obscurity.
Now other languages are coming Forth (yeah, that's a bad pun). C++, while still not all that big in the embedded world, will surely be important in the next couple of years. I'm particularly impressed by Microsoft's Visual Basic. Less than an hour after opening the package I could build real Windows applications with buttons, dialog boxes, and the like. Though it's not for embedded applications, I do think VB gives us a hint of the things to come in all environments.
NRE costs (Non-Recurring Engineering) are the bane of most technology managers' lives. NRE is the cost associated with developing a product. Its counterpart is the Cost of Goods Sold (COGS), a.k.a. Recurring Costs.
NRE costs are amortized over the life of a product, whether you account for it explicitly or not. Mature companies carefully compute the amount of engineering in a product - a car, for instance, may have $1000 per unit just in tooling costs. We smaller technology companies often act like cowboys and figure that NRE is just the cost of doing business, but if we are profitable then the price of the product somehow reflects all engineering expenses. An increase in NRE either drives up the product's price (most likely making it less competitive and thus reducing profits), or directly reduces profits.
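The amortization arithmetic is simple enough to sketch. The dollar figures below are hypothetical, chosen only to illustrate the idea:

```python
def unit_price(cogs, nre, volume):
    """Per-unit price needed to recover the recurring cost (COGS)
    plus the engineering cost (NRE) amortized over the production run."""
    return cogs + nre / volume

# Hypothetical product: $200 of parts and labor per unit, with
# $150,000 of engineering spread over a 1,000-unit run. Halve the
# volume and the same NRE adds twice as much to each unit's price.
break_even = unit_price(cogs=200.0, nre=150_000.0, volume=1000)
```

Anything charged above that break-even figure is profit; anything below it means the engineering never gets paid back.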
While NRE is often a significant percentage of recurring costs, it seems that it is all but out of control. We hear constantly about staggering cost overruns on everything from defense systems to electronic switches for telephone exchanges, yet all too often overlook our impact as engineers on our own company's much less publicized product. All of us make dumb decisions in our careers about technical/business tradeoffs; what is unforgivable is dumb decisions not corrected by learning from the mistakes. Rumor has it that Tom Watson of IBM confronted a fearful manager who just made a $5 million error; Watson proclaimed he couldn't afford to fire the tooth-chattering individual because he had just invested $5 million in his education!
How many times have you seen someone write their own windowing system for a DOS application instead of buying one off-the-shelf? Or, if you use the 196 processor did you know that Intel has tons of software already written for it - free for the asking? I remember writing a reentrant floating point package years ago for a Z80 system... only to later find one commercially available fairly cheaply. Dumb, huh?
Perhaps there is a fundamental flaw in the philosophy behind OOP - maybe no one really wants to recycle other people's code. This Not Invented Here syndrome is the worst form of NRE inflation.
NRE decisions are based on engineering time versus dollars per production unit. An electronic greeting card selling for $1 in quantities of millions needs, and can afford, lots of engineering to get the costs down. Ditto for automobiles, where engineers slave over wiring diagrams to remove a foot of expensive copper from the harness. Sometimes, though, we expend the same level of effort on a project where a little extra recurring cost saves a sea of NRE.
For example, if you are making one of something, use the maximum amount of computer power possible. Who wants to spend six months cramming code into an 8051 in assembly when you could use a 486 and write the application in Basic in a weekend? The extra cost of buying a big DOS machine is nothing compared to your salary.
By the same token, try to use off-the shelf boards in low volume applications. Designing an embedded controller might not be all that hard, but engineering time and PC layout costs will be substantial for even the tiniest board. Analyze the relative cost of just buying a VME or other computer board. Answer the question: "if it costs $xxx to design a proprietary system, is this more or less than the cost of $yyy commercial hardware when we're only building a handful of systems?"
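That question reduces to a break-even calculation. The board costs below are made up purely for illustration:

```python
def cheaper_to_buy(design_nre, proprietary_unit_cost,
                   commercial_unit_cost, volume):
    """True when buying commercial boards beats designing a
    proprietary one at the given production volume."""
    build_total = design_nre + proprietary_unit_cost * volume
    buy_total = commercial_unit_cost * volume
    return buy_total < build_total

# Hypothetical: $40,000 to design and lay out a custom board that
# then costs $150 each to build, versus an $800 off-the-shelf VME
# card. For a handful of systems the commercial board wins easily;
# the custom design only pays off past roughly 60 units.
```

The crossover volume is just the NRE divided by the per-unit savings of the custom board - below it, buy; above it, build.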
Sometimes it's easy to figure the tradeoff between NRE and COGS. You should also consider the extra complication of opportunity costs - "if I do this, then what is the cost of not doing that?" As a young engineer I realized that we could save about $5000 a year by changing from EPROMs to masked ROMs. I prepared a careful analysis and presented it to my boss, who instantly turned it down because making the change would shut down my other engineering for some time. In this case we had a tremendous backlog of projects, any of which could yield more revenue than the measly $5k saved. In effect, my boss's message was "you are more valuable than what we pay you". (That's what drives entrepreneurs into business - the hope they can get the extra money into their own pockets!)
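The opportunity-cost test my boss applied can be written down directly. The numbers here are invented to match the flavor of that story, not the actual figures:

```python
def worth_doing(annual_savings, engineer_weeks, revenue_per_week):
    """A cost-reduction project pays off only if its savings beat
    the revenue the same engineering time could earn elsewhere."""
    opportunity_cost = engineer_weeks * revenue_per_week
    return annual_savings > opportunity_cost

# Hypothetical: four weeks of work to switch to masked ROMs saves
# $5,000 a year, but those same weeks spent on a backlogged project
# would bring in $10,000 of revenue - so the switch loses.
```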
Many attempts have been made to quantify the impact on profits of late engineering. Ron Kmetovicz writes a short article in each issue of Electronic Design about time-to-market issues that is often interesting. His conclusion: late engineering is much, much more expensive in the long run than the cost of hiring extra engineers or buying capital equipment.
Even the lowest firmware engineer on the totem pole daily makes decisions that dramatically affect the company's profitability. Will your part of the system be delivered on time? While events beyond one's control might negate even heroic efforts to save a late project, all too often it comes down to a lifestyle choice: is saving the project, the company's profits, perhaps even your job, worth putting in the extra (perhaps unpaid) hours required? Each of us decides daily whether maintaining a particular lifestyle (e.g., seeing the kids for dinner) is more or less important than the project at hand.
No one should have to work crazy hours all the time. Companies that demand this are sweatshops and operate on the gray edge of immorality. However, occasionally those heroic efforts are indeed necessary. I do have a beef with companies and people who diddle away the hours, guaranteeing that only a crash effort at the end will ensure on time delivery. It's shocking how much time is lost to extracurricular activities. Recently an entire lab I was visiting was shut down for 35 minutes while an employee sold girl scout cookies to one and all. Somehow, in these days of downsizing and economic woes, this seems like petty thievery.
Career books urge you to understand the perspective of your boss, and to achieve that which is important to her. This is certainly wise, but never lose sight of the objective of making a profit. The fast track folks who rise from technical management into the high bucks corporate jobs are those who make money for the company. Satisfy their goals, and your career is assured. Avoiding decisions, or making stupid ones with only short-term benefits, is a slow form of suicide.
But, have fun! IMAGINE, an entry in the upcoming singlehanded around the world race, needs a skipper. Now, if I can just get a year's vacation...