Software Management Lessons from the 1960s

Presented by Larry Garfield (@crell)

November 14, 2019

Larry Garfield has been building websites since he was a sophomore in high school, which is longer ago than he'd like to admit. Larry was an active Drupal contributor and consultant for over a decade, and led the Drupal 8 Web Services initiative that helped transform Drupal into a modern PHP platform.

Larry is Director of Developer Experience at Platform.sh, a leading continuous deployment cloud hosting company. He is also a member of the PHP-FIG Core Committee.


“The Mythical Man-Month” is one of the seminal books in the field of software project management. It was written in 1975, based on experience from the 1960s. Is it even still relevant?

Turns out, it is. Technology may have changed dramatically but people have not. Managing software projects is about managing people, not bits, and creative people engaged in intellectual endeavors are notoriously hard to predict and manage. (Just ask my project manager.)

Fortunately, many of the lessons learned that Brooks presents are still relevant today. Some are directly applicable (“adding people to a late project makes it later”) while others are valid with a little interpretation. Still others fly in the face of conventional wisdom. What can we learn from that?

This session will present a modern overview of the ideas presented by Brooks and a look at what we can still learn from them even today.

About Larry
Larry holds a Master’s degree in Computer Science from DePaul University. He blogs at both platform.sh/ and www.garfieldtech.com.

Transcription (beta):

This presentation is brought to you by RingCentral Developers. Revolutionize the way your business communicates with RingCentral's APIs for voice, SMS, team messaging, meetings and fax. Don't just be a developer, be a game changer with RingCentral Developers.

So thank you, everyone, those of you who are here and those of you who are watching this later. We're gonna be talking about software management lessons from the 1960s. My name is Larry Garfield. You may know me online as Crell. If you want to make fun of me while watching this talk on Twitter, that's where you do it, so I highly encourage it. I'm the Director of Developer Experience at Platform.sh, a continuous deployment cloud hosting company. I'm a longtime member of the PHP-FIG Core Committee. I've been involved with FIG for a long time, and for those who get the joke, I do implement PSR-8. Those who don't get the joke, you can ignore that part. I'm just friendly.

This is a computer. This is a very old computer. Specifically, this is an IBM System/360 Model 20. System/360 was originally announced in 1964 and was the world's first complete range of compatible computers. What does that mean? It means it was the first time you had multiple models of computer that could run the same software, that actually had the same basic design to them. This was a very novel concept in 1964 and required, for the very first time, a distinction between computer architecture and computer implementation. It's the first time you really needed to separate those two. System/360 took two years to build and then shipped for over the next decade or so before being supplanted by later versions. It was incredibly successful. In particular, this distinction between architecture and implementation allowed it to do things like support reel-to-reel tape for data storage and also this really crazy newfangled concept called spinning disk storage. We had a device, you know, only about the size of a refrigerator, that could store thousands of bits of information. I mean, this is just phenomenal new technology. Things are moving along at this point.

System/360 shipped for a number of years and left a huge legacy behind in the computer industry. For example, why are bytes eight bits in size? Because they were eight bits in size on System/360. There's no reason why a byte has to be eight bits in size. In fact, it used to vary; different computers would have a different size of byte. For example, one of the most common before System/360 was six bits. So you'd have a six-bit byte, which incidentally is not enough characters to store both uppercase and lowercase letters with all the control characters you still need. So the research team studied that and said, all right, if we can only support lowercase or uppercase, which one do we support? And they found that if you can only do one or the other, all lowercase is in fact easier to read.

So they took that to their boss, to their manager, and said, so we should make our computer run with all lowercase characters. And their boss said, but we're IBM. We can't have our name in lowercase. And so all uppercase was the standard for the next several decades. That's why. But System/360 gives us an eight-bit byte, which is enough space for upper- and lowercase letters. It also originated the idea of byte-addressable memory. You cannot access memory by a bit offset, only by a byte offset; that comes from System/360. Another thing System/360 was known for was the EBCDIC character encoding, which you have probably not heard of because it died off and everyone used ASCII instead, which has now been supplanted by Unicode. So not everything from System/360 lives on, but System/360 itself still lives on as the IBM Z series of mainframe computers, which are still compatible with many of the old programs from System/360. There is still code you can run on these things that is 50 years old and still works. Yes, that's amazing. And these also will run PHP. So there's that.

The lead team for System/360 was these two men: Gene Amdahl, who was the lead architect, and Fred Brooks, who was the lead manager. Brooks is actually the person who coined the term computer architecture in the first place. After the project was over, the team wandered off to their own future endeavors, and many of them ended up in academia, like you do. And 10 years later, in 1975, Brooks wrote a book called The Mythical Man-Month, which is a collection of essays and lessons learned based on his experience working on System/360 as well as academic study of software over the intervening decade. In 1995 he released an anniversary edition, which contained all the same material as well as a couple of new chapters, in which he went back over his original statements to ask, all right, did I get it right? Is it still true? And his conclusion was, yeah, mostly. I mean, some of the implementation details were a little different, but managing large-scale software projects hasn't really changed, because humans haven't changed. The technology has changed, but people don't really change. So it's now been another 20 years and change, no pun intended. What I'd like to do is go back and ask the question: is it still valid? Are the points he made in the '70s about work in the '60s still valid today, in not-quite 2020?

And I'm gonna make the argument that most of them are. I'm going to follow a somewhat different outline here from Brooks, so we're going to go through things a little bit differently, and I'm not going to cover everything he covers. I am going to pick on a couple of open source projects here. I could make the same statements about all of them; these are just convenient ones to pick on, so don't take it as a slam against those projects in particular. And one thing I will call out that he did not get right is gratuitously excessive use of male terminology. Despite the title of the book, there's absolutely nothing gendered in anything that we're talking about here.

You've probably heard the phrase, Brooks' Law: adding people to a late software project makes it later. We've all heard this line. But why is this the case? Well, fundamentally it's because communication is hard. Communication is very hard, and communication gets harder the more people you have. In fact, there's even math for that. When you have two people that need to communicate, you have one pathway of communication. If you have three people, you have three lines of communication. If you have five people, you have 10 lines of communication. The number of lines of communication goes up faster than the number of people. It's not exponential, it's actually combinatorics; ignore the math, that doesn't matter. Point being, more people means more communication channels to have to coordinate, and the more coordination you have to do, the more effort you have to spend keeping everyone on the same page, the harder it is and the slower things move.
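The combinatorics being waved at here is just the number of pairs: with n people there are n(n-1)/2 possible communication channels. As a quick sketch (mine, not from the talk; the function name is made up for illustration):

```python
def communication_channels(people: int) -> int:
    """Pairwise communication channels among `people` team members.

    Each person can talk to each of the other (people - 1), and every
    channel is shared by two people, hence n * (n - 1) / 2.
    """
    return people * (people - 1) // 2

# The speaker's examples: 2 people -> 1 channel, 3 -> 3, 5 -> 10.
for n in (2, 3, 5, 10, 100):
    print(f"{n:>3} people: {communication_channels(n):>5} channels")
```

At 10 people that's already 45 channels, and at 100 it's 4,950: channels grow roughly with the square of the head count, which is the whole point of Brooks' Law.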

Why do they need to communicate? Why do they need to coordinate? Because tasks are not parallelizable. Most tasks in development are not truly parallelizable. This is a management talk, so I have to have a Dilbert cartoon in it: "I hired all of you because the project will take 310 days to complete. There are 310 of you, so I want you to finish by five o'clock and clean up your desks. You're all fired." That's totally how it works, right? Of course not. Why doesn't it work this way? Well, in some cases, because resources are limited. You have only so many desks. You have only so many computers. You have only so many test servers. You have only so many printers. People are at different skill levels. If you're working on a system, a web application, you need SQL people, you need PHP people, you need CSS people and HTML people, you need people versed in a particular framework.

You need people who are versed in REST APIs. These are all different skills, and you may have five really great PHP developers, but if you only have one great CSS developer, which is a different skillset, that part's going to be slower, and you can't just throw those PHP people at the CSS and expect it to work. They may also know CSS, if they're cross-trained in multiple skills, but not necessarily; it's a different skillset. Also, some tasks can block others. You can't build the front end to the database until you know what the database looks like. You can't theme, you can't write the CSS for something, when you don't know what the design is going to be, what data you even have. Designing CSS around data you don't have yet is a waste of time. Sometimes you have to do things in a certain order.

It's just not going to work otherwise. This is true even for unskilled tasks. A number of years ago (I live in Chicago) I was helping friends pack up and move from Chicago to New York. They rented a moving van, brought their friends over, ordered pizza, and had everyone help them pack the van. Now, if they have, say, a hundred boxes, that means all they need is a hundred friends and we'd be done in three minutes, right? No, of course it doesn't work that way. Why? Because we only had one elevator. We only had two dollies. There's limited space in their apartment to pack up boxes.

The person downstairs packing the moving van needs to be someone who's going to be there at the other end to unpack it, so they know which things are fragile, which things are on top of what other things, and which things are leaning kind of awkwardly. So just throwing warm bodies at the problem isn't going to work, even for a fairly unskilled task like loading a moving van. One of the things we never quite figured out in this process, though, is why anyone would want to move from Chicago to New York, but that's neither here nor there. Fundamentally, though, people and months are interchangeable commodities only when a task can be partitioned among many workers with no communication among them. As soon as you have to coordinate, as soon as you have to communicate with someone else, you have a dependency, and that slows you down more.

Dilbert, of course: How long will this project take if I add two people? Well, add one month for training, one month for the extra complexity, and one month to deal with their drama. Because humans. Humans bring drama, because humans. Now, sometimes you can get away with adding people to a project, depending on how early it is and how many swim lanes the project has. Swim lanes: it's a concept from agile. It roughly translates to the number of parallel lines of work you can have without people stepping all over each other in the code. You don't want three different people making changes that are all going to affect the same file at the same time, because you're going to spend more time sorting out merge conflicts than you do actually getting work done. Different projects will have a natural number of swim lanes, and that's going to vary depending on the project. It could be one, it could be two or three, might be as many as four or five, but usually not.

The number of developers you want on a project, I argue, is the number of swim lanes plus one. Why plus one? Because that gives you extra capacity for code review, for fixing bugs, for someone going on vacation, someone getting sick, and so on. And we're talking about the number of developers here; that doesn't count your project manager or your designer or people like that. This applies not just to people but to the technology as well. You might have heard of Amdahl's Law. This is the same Gene Amdahl we mentioned before from System/360. It gives the theoretical speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved. In plain English, it's the formula for how much speedup you can expect to get on a problem if you throw more resources at it.

Don't worry about the specific math, but what you can see here is that the more parallel the problem space, the more resources you can throw at something before it stops being helpful. And yes, the more parallel the problem, the more resources you can add to it: you can add processors, you can add people. But there's a point of diminishing returns. Even if a task is 95% parallelizable, at some point adding more resources to it, adding more warm bodies, is not going to actually improve things. At that point it's just going to take the amount of time it takes, and there's nothing you can do to speed it up that isn't going to actually slow you down instead.
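The formula behind that slide is short enough to sketch. Under Amdahl's Law, a task whose fraction p is parallelizable across n workers speeds up by 1 / ((1 - p) + p / n). This is an illustrative sketch, not from the talk; the function name is mine:

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Amdahl's Law: overall speedup when `parallel_fraction` of the
    work is split across `workers` and the rest stays strictly serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# A 95%-parallelizable task can never run more than 20x faster
# (1 / 0.05), no matter how many workers you throw at it.
for n in (2, 10, 100, 10_000):
    print(f"{n:>6} workers: {amdahl_speedup(0.95, n):5.2f}x")
```

Note how quickly the returns diminish: going from 100 workers to 10,000 barely moves the needle, because the serial 5% dominates.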

So that's Brooks' argument. Is it still true? I'm going to say yes. This completely matches my experience. I mentioned before, I'm going in a different order here than Brooks does. In fact, I'm only covering about half the book. When I was first putting this talk together, I was whining on Twitter that there's so much good material here, I could easily get two talks out of this one book. And someone helpfully pointed out that I should just get a second speaker on stage and give both halves of the talk simultaneously; then I could fit in twice as much material. Because that's totally gonna work, right?

Of course, this means that throwing people at a problem is not actually a good way to solve a resource problem. Do we know any open source projects that like to solve problems by just throwing people at them, trying to convince all kinds of new people to just get involved for the sake of getting involved and work on something? Can you name one? How many people worked on Drupal 8? As of March of this year, over 5,000. How many lines of communication is that? What should we learn from this? Next point: why are estimates always wrong? I don't know about you, but I hate doing estimates. Most people hate doing estimates. Most people are terrible at doing estimates. Why are people terrible at doing estimates? Well, Brooks argues part of that is because the thing you're estimating is not the thing you should be estimating. What you're probably estimating is the time to write the program, but the program is not actually what's helpful.

If you want that program to be a good programming system, meaning something that is general, that has good solid interfaces you can develop against, that you can build off of, that you can build things on top of, that takes three times as much work as the initial prototype to make it stable and give yourself clean, well-thought-out interfaces. If you want to take this program and make it a product, make it generalized, flexible, configurable, properly tested, write good documentation for it, factor in maintenance time, you're looking at three times the work of the initial proof of concept. Which means if you want a programming systems product, that is, a well fleshed-out system with good interfaces and configuration and good test coverage and documentation, you're looking at nine times as much work as the initial estimate of just writing the program.

Writing the happy path is easy. Writing the, you know, quick first draft that just bootstraps things and so on, that's just the top-left quadrant. That's the easy part. You're looking at nine times as much work to turn that into an actual programming systems product you can leverage and build off of. Brooks also notes here that "a substantial bank of test cases, exploring the input range and probing its boundaries, must be prepared, run, and recorded." Translation: automated testing was already a thing in 1975. If you're not already doing automated testing, you are literally decades behind the curve. Now, one thing to note here: we're not saying the tests take three times as long. We're saying that fixing all the bugs the tests find takes three times as long. That is why you want to test first, because then you don't introduce bugs in the first place.

Another point, though: how fast can you code? Think about that for a moment. How fast can you produce code? Whatever you're thinking, you're wrong, because I have not told you how much code you already have. "Extrapolation of times for the hundred-yard dash shows that a man can run a mile in under three minutes." The current world record is three minutes, 43 seconds. You cannot keep up that speed for that long. The speed you can run for a hundred meters is not what you can run for a mile, or even a marathon. You can't sprint a marathon.

In fact, there have been studies showing that the effort to write more code grows exponentially with the number of existing instructions you have to add to. Graphically, it looks like this: the more code you already have, the longer it takes to add more code to it. And this is true even if there's no communication, even if you don't have another person to coordinate with. If you're a solo developer, the bigger your existing code base, the harder it is to add code to it, because even if you are not communicating with other people, the code is communicating with other parts of the code. There are more internal interaction points, which means more things that can break if you touch something. It's also the limits of human memory. There's only so much of a program you can conceptually keep in your head at once, in your own active memory, literal active memory. And if the program is bigger than that, then you slow down as you take time to remember what you were doing six months ago in this piece of code, and try to figure out if this is going to affect some other piece of code.

Another study found that the number of instructions a person can write in a year goes down dramatically with the number of interactions within the code. I'm not quite sure how they quantified "very few" or "some" or "many" here, but you'll notice that between a highly coupled system with lots of interpersonal communication, and a system with very little internal interaction and little interaction with other people, it's nearly an order of magnitude difference. If you have a system where you don't have to talk to people and there's no existing code base, you can be 10 times faster than if you actually have existing code and existing people you have to work with. And when we say "large" here, what does that mean? In this particular study it meant 25 programmers and 30,000 instructions. Now, the clever listener is probably at this point saying, but instructions? What are you talking about, instructions?

This study was originally done for assembly, but later studies found that PL/I, which is kind of the C of the 1970s, just the common language before C took over, follows the same curve for the number of statements instead of the number of instructions it compiles to. What does this tell us? This tells us that a more expressive language, one that gives you more power per statement, does improve your productivity. If you can get more expressiveness, if you can deliver more functionality in fewer lines of code, that does give you a more maintainable program that is easier and faster to write. But you still run into the same problem: a larger code base means adding code to it is harder and slower. What do you think? Is Brooks right? I'm going to say yes. He's two for two. Of course, we also said "large" there meant 25 programmers and 30,000 instructions. How large are major open source projects?

How many lines of code are in Symfony? Take a quick guess. It's a lot more than 30,000. What can we learn from this? Let's talk about planning and documentation, which are the same thing. You should be writing your documentation up front. Why do you write your documentation up front? Because "only as one writes do the gaps appear and the inconsistencies protrude. The act of writing turns out to require hundreds of mini-decisions, and it is the existence of these that distinguishes clear, exact policies from fuzzy ones." When you actually have to sit down and describe in English, or your other native language, what it is you're trying to accomplish, you're basically rubber-ducking with your word processor, and in so doing you're flushing out all those extra fiddly little details around the edges.


"The crucial task is to get the product defined. Many failures concern exactly those aspects that were never quite specified." When I was a consultant, I ran into this problem all the time. We would go do an onsite with a customer and talk to their users, talk to their managers, spend several days with them figuring out what they want to do and how they want to do it, and come home. I'd start writing up a report of what it was we had talked about, knowing exactly what we were doing, and then I'd run into: oh wait, what about this thing? Now that I'm describing it, what happens when this value is zero? What happens when there are no news items? What happens when there are too many news items? What happens in all these other edge cases that we didn't think of, but now that I'm describing it back to you, I have to actually think about? This is how documentation is a design tool: it forces you to think through what happens in this case, what happens in that case.

What happens if I do this thing? And you want to think through what happens in these edge cases so that "edge" doesn't mean the edge of a cliff. That said, the manual, the written specification, is a necessary tool but not a sufficient one for describing what happens. Ultimately, the code is the most precise definition of what the code does, but it's not always very readable, and an English description of what the code does is not necessarily helpful. It needs the why. The why something happens, the why something is done a certain way, is crucial, and that is something code itself can never capture. You must write documentation for that. Incidentally, if anyone tells you that good code doesn't require documentation, ask them why they're doing something a certain way, and tell them they're not allowed to explain it without using comments. You'll find they produce comments very quickly, because that's what comments are for.

Brooks argues that to get this right, you need top-down design. What does top-down design mean? It means architect the system first: figure out what it is you're going to build and how, and refine it top-down into modules. Break the problem up into pieces, then solve those pieces individually. Do that recursively until you have a picture of how each part of the system fits together. Then you can implement each individual module, test each one separately (please), and integrate back up as you go. If you run into a problem, okay, that means you may need to change the architecture. You cycle back some iterations and keep going. This is your basic divide-and-conquer strategy. Divide and conquer is the fundamental strategy for basically anything in science: if you have a hard problem, break it into multiple small problems until those small problems become easy, then reassemble them to solve your original problem.

Brooks also notes that you should plan to throw one away; you will anyway. Why? Because you don't know what you need to know when you start writing. Agile talks about this a lot. The earlier you are in the project, the less you actually know about the right way to do something. The way to find out the right way to solve a problem is to solve it the wrong way and then realize what you should have done. You don't know what you don't know until you've built it wrong. You may have heard the old joke that Microsoft always takes three versions to get things right. It's not just Microsoft, everyone is like that; Microsoft just got the flack for it. At this point, I'm sure someone in the audience is thinking: but Larry, you're talking about waterfall. We all know that's terrible.

What? How can you say that? And someone did say that: Brooks, who in the second edition in '95 said, actually, yeah, that is kind of waterfall-ish, so let's not do that. Let's revise: instead, design top-down but implement iteratively. That means you start with an end-to-end skeleton. That is not your whole system; it is not a complete working system, but it has the outline of everything in place. Most of it is stubbed out and doesn't do anything, but you have a system that is end-to-end viable, does something, and compiles. And then you always make sure you have a releasable, compilable system. This may take a long time to get to; building that initial skeleton could take half your project. That's okay. You're still building out the skeleton of the system so that you can then grow modules in place: take the stubbed-out pieces one by one and replace them with more functional components that actually do what they're supposed to do. And sometimes along the way you just throw away a module and replace it with something else.

You rewrite it entirely, you replace it with some third-party library: cool, you can do that. Guess what? This is refactoring. Refactoring is the art of throwing your first version away a little bit at a time instead of all at once. That is all refactoring is. But Brooks argues this approach still necessitates top-down design, because it is a top-down growing of the software. You need to know what the structure of your system is so that you can grow the functionality onto it. If that structure is wrong, you're going to be growing the wrong thing. He also notes that "common sense, if not common practice, dictates that one should begin system debugging only after the pieces seem to work." Translation: unit tests were a thing. Architect top-down, but debug bottom-up, and you meet in the middle with a stable system.

Now, this same person who keeps heckling me will at this point most likely be saying, but Larry, you just described agile, didn't you? I mean, we've all seen this chart before, right? Who thinks this is a good way to build software? No. Bad. This is throwing the whole thing away twice. When's the last time you were able to evolve a skateboard into a bicycle, or a bicycle into a car? It doesn't actually work that way. If what you need to build is a car, and you know what you need to build is a car, starting with a skateboard will not help you. It will just waste your time with something that is not actually useful. It's a different product for a different use case. Instead, you want to do it like this: start with a skeleton that supports your end goal. It has a frame, it has four wheels, it has a steering wheel.

That's the structure we're working with. It may not have much else, but that's the structure we start with, and then you can add pieces as needed. You can add a trunk, you can add doors, you can swap out the doors if you need to, you can change the paint color, you can change the hubcaps, but you're still fundamentally dealing with a car all the way, and that first version will still run; there's already an engine there. You're not starting with something completely different. Your MVP needs to still be approximately the structure of your end system. And this also means you do not, in fact, build the most user-important feature first. You start with a foundation. You start with the skeleton, you start with the frame. If you're building a house for somebody and they tell you the most important feature for this house is to have a fireplace in the second-floor bedroom, is a fireplace in the second-floor bedroom the first thing you build? Of course not. There is no second floor, and there is no bedroom. The first thing you do is pour concrete for the foundation, because if you don't, you will never get a second-floor bedroom in which to put a fireplace.

But I think it's still true. Is Brooks right on this one? Yup. I'll give him two and a half out of three so far, because of, you know, the revision he had there.

All right,

Back in '86, Brooks gave a presentation at a conference where he was accepting some kind of award, I don't recall which, entitled "No Silver Bullet," which also ended up in the second edition of The Mythical Man-Month, in which he draws a distinction between essential complexity and accidental complexity. Essential complexity is complexity that's there because the problem space itself is hard. If you're doing an e-commerce site, you know e-commerce is hard. It's not because the tools are bad, it's because tax law is complicated and shipping logistics are complicated; therefore doing e-commerce is complicated. Accidental complexity is when the tools are hard to use and the tools are getting in the way. There are certainly e-commerce systems that are terrible and make life difficult, but even if you're using one of the good ones, it's still a hard problem space. We can do something about accidental complexity. We really can't do much about essential complexity.

"The hard part of building software is the specification, design, and testing of the conceptual construct, not the labor of representing it and testing the fidelity of the representation." Remember, when Brooks started working in the '60s, the way you debugged your code was: you took your stack of punch cards down the hall, got into the elevator, went down to the basement, walked down the hall again to the room with the computer, handed your stack of punch cards to the computer operator, and walked back to your office. You came back two hours later and the computer operator handed you back the stack of cards and said, there's an error on card 427, good luck. And you walked back up to your office and stared at card 427 for a while, trying to figure out what you did wrong. This is accidental complexity. It's not about the hardness of the problem; it's just that dealing with punch cards is a pain in the butt.

We are well past that. We have gotten rid of most of the low-hanging fruit in accidental complexity. We have real-time debuggers running on our own laptops, everyone has their own system, and good IDEs that do autocompletion and syntax highlighting and linting and all of these other things that eliminate most of the accidental complexity. And even in 1986 Brooks was saying "there is no single development which by itself promises even one order of magnitude improvement in productivity, in reliability, or in simplicity." Room for improvement? Yes. There has been steady improvement in productivity since the '80s: better languages, better paradigms, better tooling and so forth. But we're not going to get the kind of tenfold improvement that we used to get with new tools, because we've just gotten rid of all of those easy problems. We've already dealt with those. So how do we become more productive? How do we become better and more efficient and faster at our jobs? Well, we have to attack that essential complexity, and Brooks offers a number of possibilities here. One is rapid prototyping, where we grow the system organically, into place, based on user feedback. We already talked about this; this is the iterative approach, the refactoring approach, so we're not going to talk about it again. He also talks about buy versus build, and the need to mentor better architects.

He notes that the most radical possible solution for constructing software is to not construct it at all. Recall that in the sixties and seventies computer hardware could easily cost $20 million. If you're spending $20 million on computer hardware, custom-writing all of your own software for $50,000 is a rounding error. Your accountant won't even notice that cost. Now you can get a fully capable computer with storage, memory, CPU, everything, just plug in a keyboard and a monitor, for 10 bucks. The hardware is not the cost anymore. The hardware is almost free. Software is expensive to write because software requires humans, and humans have not gotten any cheaper. So instead what we need to do is reuse our tools and components. Really, why would you write your own spreadsheet when there are already ones out there that you can just buy or download and use? Why would you write your own word processor when there are ones out there already that you can buy or download and use? Why would you write your own web server? Why would you write your own HTTP client? There are plenty out there you can just use already.

This is how you get more efficient. This is how you become more productive. He basically just described open source, in the 80s, when free software was just starting to become a thing as a concept; he was predicting it in the 1980s. The vast library of easily downloadable code that we can collectively use and share and improve on, that is open source, is the number one productivity improvement of the last 30 years. If your programming systems product, your application, is not at least 80% open source code, you are wasting money. You are throwing money away, because it's out there. Why are you rewriting something you don't need to rewrite? Fundamentally, the way to be more productive is to write less code, and the way to do that is to reuse more code.
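As a small illustration of that reuse point (my sketch, not from the talk): parsing JSON by hand would be a project in itself, with a tokenizer, a parser, and a test suite. Python's standard library already ships a battle-tested one, so the whole job is one call.

```python
import json

# Reusing the standard library's parser: one line instead of
# writing and debugging our own tokenizer and recursive-descent parser.
order = json.loads('{"id": 427, "items": ["punch cards", "elevator pass"]}')

print(order["id"])          # 427
print(len(order["items"]))  # 2
```

The same logic applies one level up: an HTTP client, a template engine, a queue. If a well-maintained component exists, writing your own is code you pay to create, debug, and maintain for no gain.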

Let's talk about the other idea there: growing great designers. Fundamentally, Brooks says, software construction is a creative process, and so we need to treat it as a creative process and look to creative industries, not logic industries, for how to address software. Study after study shows that the very best designers produce structures that are faster, smaller, simpler, cleaner, and produced with less effort. The difference between the great and the average is an order of magnitude. "Oh God, the 10x developer! Discrimination! Gatekeeping!" No: 10x designers. 10x designs. Having the right architecture doesn't mean you're going to produce code 10 times faster. No developer is going to produce code 10 times faster than your average developer, at least not code that actually works. However, a good architecture, a good design, will let you get away with writing 10 times less code in the first place, which also means 10 times fewer lines of code to have bugs in, 10 times less code that you have to debug.

Like we've talked about before, software architecture conceptually has more in common with graphic design than engineering. Architecture is a completely separate skill set from programming, and Brooks argues it needs recognition on an equal level with management. Your senior architect and your senior management are peers. They have to be. If they're not, if the managers are still treating architecture as something minions do, you are not going to have a good architecture, because it is a senior-level, advanced skill. So how do we get people with that skill? Through mentoring. Identify good candidates early. They may not be the best programmers; they may not be the most experienced programmers. Frequently someone who is a decent-to-good programmer will make an excellent architect, and someone who is a fantastic programmer may not be the best architect. I'll be perfectly honest: I'm a better architect than I am a programmer.

I can program just fine, but there are plenty of people who can toss around algorithms and optimizations way better than I can. Architecture is where my skill set is. Once you find those people, give them a career mentor, a real formal mentor, not just someone they hang out with. Give them the ability to go apprentice with somebody. Give them formal education and training in software architecture, which is not the same thing as software engineering; there are courses for that. Get them solo work: they can try out architecting smaller projects, get their feet wet, make some mistakes, and learn from those in ways that are not going to bankrupt your whole company. Encourage them to collaborate with other designers. Note, here we've shifted from "architect" to "designer" as a term. Let them take classes, let them go to conferences, have them work with architects at other companies or your own company or both. These are all things that the design world already does. This is how the graphic design industry works, how the product design industry works, how the industrial design industry works. The software design industry needs to do the same, because it is the same problem space, the same kind of thinking.

What do you think? Is he right? I'm going to say absolutely. He definitely was right about open source, and in my experience over the years, having architects who are more like designers and less like programmers, and who have management-level, director-level recognition, is absolutely critical. And finally, let's talk about conceptual integrity. This is easily the core point of the entire book: the conceptual integrity of the product, as perceived by the user, is the most important factor in ease of use. Why? Because you want the user to have only one mental model for interacting with the system. You want them to have to think about only one set of nouns, one set of mental patterns. That means they have to spend less effort figuring out how to understand the system, and they're not going to get confused by different paradigms, different workflows, different patterns that they have to keep track of.

In fact, Brooks states, ease of use is not a simple scalar value. It is the relationship between functionality and conceptual complexity. You could have something that is conceptually complex, but if you get more functionality out of it, it's still just as easy to use relative to what you get out of it. For example, 3D modeling software, like 3D Studio Max, LightWave, and so forth: insanely complex programs. You have to take classes to learn how to use these things effectively; you can't just self-teach on them very well. But the capability you get out of them is absolutely amazing. Remember, most Marvel movies are in fact animated films; they're just animated to look live-action. That's all happening with 3D modeling software, and it's phenomenal what you can do, but it takes a lot of effort to learn. On the other end, compare a text editor: very little functionality. There's not much you can do with a text editor, but it takes about 30 seconds to learn how to do all the things it can do. So does that mean a text editor is just as usable as 3D modeling software? Is it just as easy to use, from one perspective? Yes, because the functionality you get for the mental complexity you have to understand is a similar ratio. Insert vi joke here.

Fundamentally, though, simplicity and straightforwardness proceed from conceptual integrity, and conceptual integrity is the most important consideration in system design. It is better to have a consistent design than more functionality. See also the original iPhone: it lacked copy and paste, it lacked an app store, it lacked high-speed internet. It lacked all kinds of things that people figured were fundamental, needed features, and it did just fine, thank you very much, because the few things it did, it did extraordinarily well. But conceptual integrity of the product not only makes it easier to use, it also makes it easier to build and less subject to bugs. If you've read anything about domain-driven design, this is called ubiquitous language. Same idea, different words to describe it. You want the developer to have only one mental model of the system, just as much as the user has only one mental model of the system. Because if the developer has to shift their mindset between the way of doing things over here and the way of doing things over there, and the nouns that are used here and the nouns that are used there, they're going to get it wrong. They're gonna screw up at some point, and things aren't going to line up properly, and things will be done in slightly different ways in slightly different places. And that's where subtle bugs come from.
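Here's a small sketch (my own, with invented names, not from the talk) of what "one mental model, one set of nouns" looks like in code: if the business calls the thing an invoice, then the domain model, the storage layer, and the display layer all call it an invoice, with the same fields.

```python
from dataclasses import dataclass

# One noun, one model, used in every layer. Nobody has to remember
# that the UI calls it a "bill" while storage calls it a "receipt":
# it is "invoice" everywhere, which is ubiquitous language in miniature.
@dataclass
class Invoice:
    number: str
    total_cents: int

def save_invoice(invoice: Invoice) -> dict:
    """Storage layer: same noun, same field names."""
    return {"number": invoice.number, "total_cents": invoice.total_cents}

def render_invoice(invoice: Invoice) -> str:
    """Display layer: same noun again."""
    return f"Invoice {invoice.number}: ${invoice.total_cents / 100:.2f}"

inv = Invoice(number="2019-0042", total_cents=129900)
print(render_invoice(inv))  # Invoice 2019-0042: $1299.00
```

The point is not the code itself but the consistency: a developer who learns the noun once can navigate every layer without translating between vocabularies.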

So how do we get conceptual integrity? Through smart division of labor. Specifically, Brooks says, the design must proceed from one mind, or a very small number of agreeing resonant minds. Not groupthink, but not conflicting thinking either. There must be a clear, coherent, uniform vision for the entire system. And Brooks discusses two possible ways to get this. One is a surgical team, modeled on the way an actual hospital surgical team works: you have a surgeon working on the patient, and she is focused exclusively on that and nothing else. There's someone standing next to her to wipe her brow, to keep the sweat away. There's someone to hand her the tools she needs, another person to take tools away when she's done with them. There's someone else over there to keep the family, I mean the client, away so that she can focus exclusively on that job.

I've never seen this actually work in practice, so let's not talk about it further. Instead, the other option is to split the architect from the implementer. These are different job descriptions, different roles, different responsibilities, different people. The architect is the user's agent. It is their job to bring professional and technical knowledge to bear in the unalloyed interest of the user, as opposed to the interests of the salesman, the fabricator, the managers, and so on. They are a user advocate. Their job is to call the shots and decide: what is the ubiquitous language? What is the set of nouns we're going to use? What is the development pattern we're going to follow? So that it is consistent internally and externally, which makes it easier for the development team to build and easier for the user to use.

"Oh my God, but that's autocracy! That's cathedral design! We want the bazaar! We want emergent architecture!" Guess what? Cathedrals are still standing centuries later. Cathedrals stand for centuries and hang together and work because people followed a plan. People followed an architecture. People followed the directions they were given. A bazaar is great for a number of things, but you're never going to build a cathedral in an ad hoc, everyone-does-their-own-thing kind of way. If you want something big and consistent that will last, you need to plan it. You need top-down planning. You need top-down decision making. Top-down consistency.

See also Apple. Apple has a reputation for great products. They have a lot of designers there, a huge number of designers, all of them quite good, but at the end of the day, Jony Ive was the one calling the shots. The design at Apple is Jony Ive's, or was; who knows what's going to happen now that he's gone. Look at Google: they have hundreds of product teams with designers on them, but at the end of the day, Matías Duarte is the one calling the shots, and it is his design that matters. If you're doing anything with a Unix system, which includes any Linux, you're following POSIX. That is the standard for how the system works. If you don't like it, well, that's nice; you're working on a Unix system. You follow that architecture or your code is not going to work correctly. It will not interoperate with other applications properly. It will break when you move from one system to another. If you're working on the web, you're dealing with HTTP, you're dealing with web browsers. That is the standard. There is a standard for how caching works. If you don't like it, well, that's nice. If you don't follow it properly, your caching will not work, because that is the architecture, and it doesn't work unless everyone follows the same architecture. So follow the architecture.
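To make the HTTP caching point concrete, here is a deliberately minimal sketch (mine, not from the talk) of the core freshness rule from the HTTP caching specification (RFC 7234): a stored response with `Cache-Control: max-age=N` is fresh for N seconds. A cache that ignores that shared rule simply will not interoperate with the rest of the web.

```python
def is_fresh(cache_control: str, age_seconds: int) -> bool:
    """Simplified freshness check in the spirit of RFC 7234: a stored
    response is fresh while its current age is below max-age.
    (Real caches also handle no-store, no-cache, s-maxage, Expires,
    validators, and more -- this shows only the central rule.)"""
    for directive in cache_control.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            return age_seconds < int(directive.split("=", 1)[1])
    return False  # no explicit freshness lifetime: treat as stale

print(is_fresh("public, max-age=3600", 120))   # True: still fresh
print(is_fresh("public, max-age=3600", 7200))  # False: stale, revalidate
```

Every browser, proxy, and CDN applies this same arithmetic, which is exactly the speaker's point: the architecture only works because everyone follows it.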

Fundamentally, an architect is responsible for designing the idea of the watch, the dials and hands; the implementer for building the gears and bells. Both of these can be challenging; both of these can be fun. And a good architecture supports many different implementations. The fundamental architecture of your wristwatch and Big Ben are the same, but there's still a lot of challenge that goes into implementing those different versions of that same architecture. Now, the architect needs to always be prepared to show an implementation for anything they are suggesting, but they can't dictate the implementation. They have to be able to demonstrate: I'm not asking you to do anything that cannot be done. But once they've demonstrated that it can be done, the best way of doing it is up to the implementer to figure out. This also helps reduce channels of communication. You need ongoing, cooperative communication.
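The dials-and-gears split maps naturally onto interface versus implementation. A hedged sketch (class names invented for illustration): the architect fixes the face of the clock as a contract, and implementers are free to build whatever gears satisfy it, whether for a wristwatch or a clock tower.

```python
from abc import ABC, abstractmethod

class Clock(ABC):
    """The architect's contract: every clock shows hours and minutes.
    How the 'gears' work inside is the implementer's business."""
    @abstractmethod
    def time(self) -> str: ...

class Wristwatch(Clock):
    def __init__(self, hours: int, minutes: int):
        self._h, self._m = hours, minutes
    def time(self) -> str:
        return f"{self._h:02d}:{self._m:02d}"

class TowerClock(Clock):
    """Same architecture as the wristwatch, very different internals."""
    def __init__(self, minutes_since_midnight: int):
        self._mins = minutes_since_midnight
    def time(self) -> str:
        return f"{self._mins // 60:02d}:{self._mins % 60:02d}"

# Both implementations satisfy the one architecture.
print(Wristwatch(9, 5).time())  # 09:05
print(TowerClock(545).time())   # 09:05
```

Note that defining the abstract class also serves as the architect's proof that an implementation is possible, without dictating which gears to use.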

There's a high degree of communication between the architect and the implementers, but not necessarily between every implementer and every other implementer. So instead of having a web of communication channels, you have a tree of communication channels. You can divide and conquer. If the system is large enough, you break it up and have different sub-architects for different parts of the system, and those sub-architects have that same level of control, of direction, within their segment. The architects communicate with each other, and the implementation teams communicate only within their subsystem. This is how you build large systems. This is how Linux gets built: one of the largest software projects in the world has this model. It is top-down and hierarchical, with different people responsible for different parts who have final say in those parts 99% of the time. Not just emergent, whoever-feels-like-contributing stuff. Democratic architecture is called mud. I've seen this personally; it's called older versions of Drupal. But think about it now: do we know of any systems that don't really have anyone in charge, don't have anyone really steering the ship and deciding what the architecture is going to be, and that as a result have a lot of internal inconsistency that makes them hard to work with, where people constantly complain about that inconsistency because it makes them harder to work with, harder to understand, and harder to learn? Can we name any of those?
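The web-versus-tree point is just arithmetic, and it's the same arithmetic Brooks uses in the book for intercommunication: with n people all talking to each other you get n(n-1)/2 channels, while a tree needs only n-1. A quick sketch:

```python
def mesh_channels(n: int) -> int:
    # Everyone talks to everyone: n choose 2 pairwise channels.
    return n * (n - 1) // 2

def tree_channels(n: int) -> int:
    # Each person talks only up to their (sub-)architect: n - 1 edges.
    return n - 1

for n in (4, 10, 50):
    print(n, mesh_channels(n), tree_channels(n))
# prints:
# 4 6 3
# 10 45 9
# 50 1225 49
```

The mesh grows quadratically while the tree grows linearly, which is why the divide-and-conquer structure is what keeps communication overhead survivable on large teams.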

What should we learn from this? Then we're talking about large systems here. How big is big? How big is it a large team that we need to break things up this way? Well, after teaching a software engineering laboratory more than 20 times, Brooks came to insist the teams as small as four people choose a manager and separate architect. Either of them can be the boss, but as team, as small as four manager, architect, two implementers because even at that size you benefit from having one brain driving the overall direction so that that overall direction is consistent and straight and not wavy and meandering all over the place. Again, it doesn't matter who the actual boss on the team is. It could be the manager, it could be the architect as long as you are all in agreement about who is the final authority, but there is still a final authority. What should we learn from all of this? What can we learn from this book? Software construction is fundamentally a creative process. It has more in common with design than it does with programming.

When in doubt, divide and conquer is your go-to strategy for most things. Take a complex problem, break it up into individual pieces, solve those problems, put them back together. This lets you build a shareable programming systems product, a good robust set of components, which you should release as open source, or leverage open source components wherever possible, to save time and effort and also give more accurate estimates. This lets you decouple your libraries from your framework, so that you can swap them out individually if you need to, so that you can grow them into place if you need to. That gives your system the ability to evolve over time. But to do that, you need a top-down design. You need a clear vision of what the system looks like, not just throwing code at it and letting the architecture emerge on its own; that's not really a thing. And the best way to get that consistent, coherent vision is to empower architects to make decisions, and let those decisions stick. Architects are the users' advocate. They are designers. They should definitely take input from the team; they're not operating in an ivory tower. But at the end of the day, the architect's job is to be the keeper of the architecture, the keeper of consistency, the keeper of coherent design. And we have to give those people the authority to do that effectively. If we don't, we end up with a big ball of mud.

Fundamentally, the idea that people knew a thing or two in the seventies is strange to a lot of young programmers. This is from Donald Knuth, considered the father of the analysis of algorithms and author of The Art of Computer Programming. What can we learn from this? Thank you. I do recommend everyone go check out the book. I don't get any commission on it, but you can get The Mythical Man-Month at most bookstores, in print or digital versions. With that, thank you very much. And for those who are here live: do you have any questions?


Tags: management

SPONSORS