In today’s age of digital metrics and multi-channel engagement, the conventional ad is sometimes thought of as an ineffective dinosaur. This thinking comes from the experience of running ineffective ads and seeing nothing trackable from the effort – especially in the face of digital offerings that can at least send back some sort of metrics. Having worked with a number of clients to build advertisements over the years, I’ve learned that making an effective ad is a bit of a mystery for many.
To shed some light on effective ad development, I’m going to be doing a series of posts that speak to the more important aspects of ad design by using actual advertisements as examples.
Some examples will illustrate the good things, some the not-so-good. I’m going to try to mix up the kinds of example ads and the target markets they aim at. At the end of the day, the goal is not to chastise the bad or the lost. Regardless of the sorts of products or consumers targeted, the breakdown of these ads should help future ad creators make their work a bit more impactful – and for the right reasons. The goal is always to inform.
To summarize the core tenets of what makes a good ad, I’ve put together an entry-level list of the most important aspects. I’ll go into detail about these aspects in successive posts.
What makes a good ad:
It speaks the target audience’s language
It’s placed where the target audience will be
It asks a specific action of the viewer
It has one specific and focused message
It’s designed to impart that message within a blink of an eye
It’s built to impart and support the brand positioning of the firm
While this series is aimed primarily at conventional advertising, by no means are its salient points any less true or useful in the digital/social world. In fact, regardless of today’s A/B testing capabilities, readily available analytics, or even healthy slatherings of machine learning and other buzz-friendly technologies, the core tenets here will still have far more impact and utility in crafting an ad that gets the desired customer results.
For a while the “request a quote” phrase, and then the button leading to a form (one we were all confident would never be responded to), worked quite well. But technology eventually catches up to everyone, and B2B is no different. Most suppliers and manufacturers have seen the benefit of merely having a presence on the internet. And they should, as studies have shown that even in the B2B world, 93.7% of purchases start with search. So what’s my issue with putting up a quote form rather than a cart?
The issue is that the B2B shopper is also a B2C shopper. If we all dig down, we all know it to be true. The B2B shopper (us) has had something like two decades of point-click-buy experience. It’s no longer a wish or desire, it’s a necessity for consumer-focused firms to be able to close that sale with as little friction as possible. With this in mind, I would posit that it’s almost impossible to close B2B stock product sales by hoping the customer picks up the phone instead of navigating to a site with a buy button.
The question becomes, “So why isn’t every B2B supplier ditching the quote button for the buy button – especially with stock components?”
One of the most lauded reasons is that quoting parts, rather than simply pricing them, gives a black-box aspect to the sale. This layer of obfuscation helps firms adjust prices to the scale, the complexity – or even the sort of customer that’s inquiring. Small run? Well, it costs virtually the same to service a small order as it does a big run, so the short run costs more because that time could be spent on large orders. High complexity? It takes more brain power; the price goes up. A customer with deep pockets or a short time frame? Price goes up. That flexibility in pricing is hard to give up, for sure.
Another is the perception of insurmountable complexity in setting up an online ordering system. The fear is that it requires a room full of people and a stack of servers humming along in the basement, killing the AC and sucking the life out of the bottom line. In reality, the barriers preventing companies from opening ecommerce operations are the lowest they have ever been. Personally, I have been able to go from bare web server to storefront in under two days. Granted, it’s no Amazon or McMaster-Carr, but if I can do it, anyone can.
Perhaps the scariest reason of them all is, “we’ve always done it this way.” Sadly, I’ve heard this more than I should, especially in the B2B world.
What’s the cost of all this fear? Lost sales. The B2B purchasing manager who has gotten used to buying all manner of products for themselves and others with a finger touch (and probably on their phone) knows that there’s a firm out there with a site that will tell them the price. Of course, that site is also the one with a ‘buy’ button, so they can just get it now – not wait a few days to have a quote compiled, received, compared, agreements signed, and so forth for an off-the-shelf part.
Maybe the scariest fear should be the feeling of prospective buyers skipping over a quote site for the vendor where they can simply press ‘buy.’ What’s even scarier is that the buyer will never find out a part is cheaper, faster to ship, or better in some regard – the sale was passed over because there wasn’t a ‘buy’ button.
I read an article the other day on TechCrunch that I’d summarize as “Forces are holding Silicon Valley back from creating viable hardware companies.” The reason that sticks out most in my mind is that the sheer size of the incumbent players in the consumer hardware market makes it extremely difficult to enter said market. The article used the recent stock gyrations of GoPro – and by extension, the talk that the firm may be up for sale – as proof of such a hurdle in the market.
I don’t know if GoPro is up for sale and I don’t know if they should sell. Heck, I don’t even know if you should buy or sell the stock – use your best judgement, not mine. However, I was interested in whether their financial data would show some sort of barrier-to-entry problem behind the firm’s troubles. For this exercise, I took a look at the quarterly income statements and balance sheet data (courtesy of AmigoBulls), as well as the quarterly stock price for the firm (from Yahoo Finance) for the period of March 2013 to August 2017. The firm’s statements indicate it went public somewhere around June 2014 at an initial price of $24 a share, a little while after it officially adopted the name GoPro. The financial statements also contain data prior to the IPO, which I assume is part of the regulatory filings necessary before the initial sale.
When you have a look at GoPro’s numbers, it appears to be a company that’s not doing terribly. Below are the Net Sales, CoGS, and Net Income.
Through January 2016, the firm seems to be growing quite well, with the rather predictable Christmas bumps in the numbers. There’s no spectacular growth here, so it’s probably moving toward maturity in the market with its current products.
The ratio of sales to cost of goods sold remains more or less congruent through the period. That would indicate there haven’t been any shenanigans with suppliers or absurd price pressure from competitors with larger economies of scale. I’d assume that after January 2016, low-price competitors entered the market and began to swipe away the low-hanging fruit, hence the drop in net income. By the next year, it would seem GoPro had puzzled out some responses to the invaders.
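As a rough illustration of that congruence check, here’s a sketch that computes the CoGS-to-sales ratio per quarter. The figures below are hypothetical placeholders for illustration, not GoPro’s actual numbers:

```python
# Hypothetical quarterly figures (in $M) to illustrate checking whether the
# sales-to-CoGS relationship stays congruent over time.

quarters = ["2014Q4", "2015Q1", "2015Q2", "2015Q3"]
net_sales = [634, 363, 420, 400]
cogs = [342, 199, 222, 216]

# CoGS as a fraction of sales, quarter by quarter.
ratios = [c / s for c, s in zip(cogs, net_sales)]

for q, r in zip(quarters, ratios):
    # A ratio that stays in a narrow band suggests no supplier shenanigans
    # or sudden price pressure.
    print(q, round(r, 2))
```

With these stand-in numbers, every quarter lands in a narrow band around 0.53–0.55, which is the kind of stability the paragraph above is describing.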
Looking at the balance sheet, the numbers would support a re-targeting of the firm toward investment in areas where GoPro could best compete. This could be reflected in the drop in retained earnings and the increase in assets and liabilities. I’d assume the retained earnings were re-invested rather than returned to shareholders during this period.
Going back to the income statement, R&D pulsed at the time sales dropped and leveled off in the second quarter of 2017, when I’d assume those development efforts moved to production and instigated the need for capital-spending support – thus the uptick in assets and liabilities at the end of the previous graph.
While I’ve dug out nothing more to go on – call it looking for Occam’s Razor…or laziness – this would indicate to me a rather stable firm with a management group that’s on their game at some level. But more importantly, these findings fail to indicate any tremendous pressure from a large established consumer electronics firm bent on crushing outsiders.
So where’s this insurmountable barrier for Silicon Valley hardware startups?
I think the real culprit can be found when looking at the movement of the stock price. Here’s the quarterly stock price for the firm for the same time periods above:
When you put the stock price up against the previous income statement graph, the issue becomes clear. In the beginning, the stock price seems to be independent of the movements of the underlying company metrics but falls more in line around January 2016.
Here’s my favorite graph so far.
What you see here, in red, is the average trailing PE for consumer electronics (and office) products as offered by Stern – the PE for the entire industry. The BLUE line takes the first-quarter share price of the newly public company and uses it, with the reported EPS, to calculate the three quarters BEFORE launch. After launch, the blue line is calculated by dividing each quarter’s share price by the corresponding real EPS. The smaller spike before January 2015 is the IPO issuance.
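To make the blue line’s construction concrete, here’s a small sketch of the calculation described above. All EPS and post-launch price values are hypothetical stand-ins; only the $24 figure echoes the IPO price mentioned earlier:

```python
# Illustrative sketch of the trailing-PE construction described above.
# All figures are hypothetical placeholders, not GoPro's actual data.

def trailing_pe(price, eps):
    """Price-to-earnings ratio for one quarter."""
    return price / eps

ipo_price = 24.00  # first-quarter share price of the newly public company

# Quarterly EPS (hypothetical) for the three quarters BEFORE the IPO.
pre_ipo_eps = [0.10, 0.15, 0.20]

# (share price, EPS) pairs (hypothetical) for quarters AFTER launch.
post_ipo = [(36.00, 0.24), (70.00, 0.30)]

# Before launch: apply the first public-quarter price to the earlier EPS.
pre_ipo_pe = [trailing_pe(ipo_price, eps) for eps in pre_ipo_eps]

# After launch: use the actual share price for each quarter.
post_ipo_pe = [trailing_pe(price, eps) for price, eps in post_ipo]

print(pre_ipo_pe)   # [240.0, 160.0, 120.0]
print(post_ipo_pe)  # [150.0, 233.33...]
```

The pre-launch segment is therefore a what-if: it asks what the market’s PE would have looked like had the IPO price applied to earlier earnings.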
My takeaway is that it’s not that Silicon Valley cannot produce a competitive hardware company; it’s that, at best, it can’t price a hardware company adequately for IPO and, at worst, it doesn’t really understand the investment requirements that physical-goods companies carry. What’s worse, investors to this day use that hype-induced IPO spike as a barometer of the company’s real worth (a tribute to the power of behavioral finance, and perhaps to bad VC profit-harvesting practice), damning the firm to perceptions of under-performance going forward.
Further, GoPro may be a fine company at the scale it is now – and perhaps this is the scale it always should have been considered at. Pumping its stock price to the stratosphere served only to hurt the firm in the long run, which is why what seems to be a perfectly fine company has to fend off rumors of a potential sale rather than incrementally building itself – which is what most solid companies do that are not founded in Silicon Valley.
In the end, GoPro, TechCrunch’s piñata, may actually be priced pretty close to fair right now. Too bad it wasn’t in the beginning…but your metrics may vary.
Service oriented companies sometimes have it pretty easy. Service is their entire offering.
On the other hand, the product company studiously compares feature to feature, specification to specification, in hopes of finding an edge with the customer. Maybe new positioning would help differentiate the product in a competitive market. Maybe pricing could create some much-needed sales momentum. With everything that goes into marketing a competitive product, it’s easy for a company to get caught up in how an item performs against others but neglect the encompassing service aspects that could make or break a sale.
A look into Cadillac’s new leasing focus is a good example of a product-focused firm’s careful work on product coming at the cost of the services surrounding the buying process. As the article states, Cadillac has been producing an entire line of vehicles that can go toe-to-toe with the European luxury brands, but sales haven’t reflected those advancements in design, experience, and technology. No real penetration into the luxury market Cadillac once owned a long time ago.
Not having worked at Cadillac, I can only postulate why this aspect was overlooked, but I’ve seen similar things happen in other markets – especially highly competitive B2B markets, where sometimes it’s everything but the product that sets the offerings apart.
If the firm builds products, it’s critically important to consider every aspect of the customer’s sales journey. Sometimes the mechanical bits that a little product myopia overlooks become the most important aspects of a sale. Responsiveness, delivery performance, and even sales or warranty terms can be the make-or-break point. The savvy have to keep focus on all of these aspects if the firm is to remain competitive.
It struck me today that I don’t think I’ve heard anyone speak about these voice services from a particular marketing perspective. Sure, there’s been talk about marketing the devices, the services themselves and even how selling something would work on a platform where there’s no visual aspect to it. But let’s talk about something different. Let’s talk about branding.
The overall goal of branding a service, product, or company is basically to connect the name or representation of the company to (typically) positive aspects that set it apart from the competition. Branding also comes in two flavors: the perception of what the company experience will be and, perhaps more important, the connection to the actual qualities experienced afterward.
Ideally, this allusion is also instrumental in nudging the buying process in the company’s favor. For instance, you have DeWalt, a brand distinct from Black & Decker, whose products are aimed at professional builders; the branding works to exude an almost industrial-quality ruggedness. Or there’s Breitling, which positions itself as the timepiece of the elite sports enthusiast by connecting its products to pursuits of the wealthy, like pylon racing or high-end auto racing. DeWalt tools are pictured at the job site, and Breitling is plastered all over events like Le Mans. That’s their best shot at reaffirmation and connecting with the target audience.
So how do the voice services do in this regard? There are a number of services out there, but I’ll focus on the big four: Alexa, Siri, Cortana, and Google Assistant. The first three are all brand-new names in the personal electronics space. Google chose to go for something of a brand extension. Interestingly, the “…Assistant” part of the Google service doesn’t show up much. The others’ names also double as the call word to activate the service; one summons the Google Assistant with “Okay Google.” Big deal, right?
Looking a bit more closely, the first three product names have no real baked-in connection to their respective master brands. Each of the firms had to work diligently not just to connect the dots but to construct the individual names themselves. It’s much more difficult to think of Cortana or Alexa as a component of the greater ecosystem of its respective company. It’s been tough sledding to get people to recall shopping on Amazon with their Echo products. I’m sure it’s been rather dreadful trying to explain the utility of Cortana with Microsoft products. And Siri’s, well, Siri – it seems to be around to help sell the phone you already have.
Then there’s Google’s foray. What’s so genius about saying “Okay Google” is that the company’s brand name is on the tongue of every person who uses the device. That’s something to really let sink in.
It’s a branding coup unlike any we’ve ever seen. Google has figured out a way to have customers say its name over and over again – and (most times) in the exact moment when its main service is needed most. The name ‘Google’ is synonymous with search, and now when people use their nearly omnipresent Google device, they repeat the brand of the company over and over again. Every utterance for the service is a reaffirmation of Google as the search of choice.
Previously, the best outcomes were having a customer hear your brand name, see it, or read it. To have the customer connect the name of the company with its prime product over and over and over again – from thinking to saying to being rewarded – is almost the greatest psychological trick marketing could achieve.
Will Google be victorious in the space? I don’t know. But from a branding standpoint, they’ve got the best foundation out there.
Curiously, VR is rearing its head at the same time big data and artificial intelligence technologies are washing ashore in businesses across the globe. As I’m sure has been hammered into everyone’s head, Big Data is the answer for everything (if you tend to believe the hyperbole) – unless AI is the answer. In any number of Big Data articles, a common refrain is that more nuanced results come from more causes combining in ways that were heretofore impossible to calculate, much less visualize, with current technology.
What does Big Data look like? The more-or-less tangible manifestation is typically large databases with a number of interlocking tables that connect data in ways a piece of paper would have a hard time containing. It could also be large amounts of unstructured data, or perhaps real-time streams of the stuff. Any one of these aspects makes dumping data into a spreadsheet a difficult, possibly impossible prospect. That also means we’ve essentially started butting up against the limits of what two-dimensional spreadsheets can do without an excessive amount of programming behind the scenes.
The world isn’t as simple as what spreadsheets can display, either. Or, more to the point, perhaps we’ve already harvested the bulk of the easy correlations and causations. A great analogy is the bounty of insights found simply by moving data from paper records to the computer and gaining the ability to apply basic math to that data. The simple wins sounded like knowing right now what the balance sheet looks like. Or better, being able to show percentages of where a firm spends money. Maybe even plotting product-quality data to find unseen trends. That was cutting edge in the ’60s, ’70s, and ’80s, but what was cutting edge yesterday is just not enough in the business of today, and certainly not in the future.
Perhaps the next step in office applications is to not just view but operate on these data sets in their multidimensional world, rather than working to transcribe them into dumber formats. The ability to enter that 4D space with VR gives us that opportunity.
Increasingly, we’ll see artificial intelligence seep into our workplaces as well. It won’t enter through the Hollywood portrayals; it’ll come in small ways. Smarter applications will solve the easier problems and eventually round up insights on the usual subjects – the same ones we spend a lot of time building complex spreadsheets for. Humans won’t have to do that anymore. We’ll need to focus on where there’s more ambiguity, sensitivity, and creativity for as long as it takes our AI overlords to catch up.
All this means we’ll increasingly find ourselves operating on projects of growing complexity during our workdays. How better to do so than to bring the benefits of VR to the business world? I can only guess what these applications will look like, but I’m sure they will let us manipulate denser data with greater ease – because that’s what the future looks like for the human worker.
Living in what is usually called ‘flyover country,’ we don’t get to see a lot of the more interesting ideas found under the banner of the ‘sharing economy.’ That could be explained away in a number of ways: things just take a while to get here (a good example is fashion, where some of the more unsavory trends also seem to take too long to leave), startups aren’t ready to expand into our market yet, or there’s just not a steaming cauldron of tech-savvy people in the area. But the lack of arrival also brings up musings about the limiting bounds of such services – namely, density and anonymity.
Seeing as services like Uber, TaskRabbit, and any number of other “I have free time, how about I use an app to make a few bucks” services usually originate in larger metropolises like New York, Seattle, or the startup mecca of San Francisco, their birth locations seem obvious. There is a certain density of pre-existing potential customers in these cities and probably a greater-than-average number of willing early adopters as well. I won’t speak for the rest of the world, because I won’t pretend to assume how it functions, but what happens when these sorts of services get translated to less dense, ‘more conservative’ areas of the US? I think this is when the seams of these services start to show.
The first thing that happens when you leave the high-density city world is that the pool of potential customers shrinks quickly. Customer-base density plays a large part in the economics of these services. At a certain point in this migration, the shrinking population will move service providers on Lyft or similar platforms from the potential of full-time employment to part-time or even less. This may be a much larger issue than the companies let on.
If these things can’t be done as a primary job, most service providers will need a full-time gig. I’d think this will cause a dearth of operators, particularly during the 9-to-5s of the week (when nearly all of us work the regular job or go to school) and during drive time, when the services may be needed most. Of course, this reduction is self-fulfilling: once there are fewer service providers, there will be less utility for customers, and by extension less opportunity for the service to be useful to providers, as there just isn’t enough demand to make the gig feel lucrative.
When talking about areas outside the largest US cities, population per area usually decreases as well. When the service extends to locations where a city’s worth of people is spread across a much larger area, the density of the service-provider workforce also drops. This tends to reduce the convenience of the service, and at a certain point it hits the hurdle of being merely as convenient as the alternative – like just doing it yourself.
A good example of how geographies across the country differ is comparing Oklahoma City to San Francisco. OKC has a population of around 630,000, which could be considered almost comparable to San Francisco’s 860,000-ish – but the former stretches those people over 620 square miles while the latter consolidates its population in less than 50. With that amount of sprawl, workforce costs will increase, as transportation becomes a larger and larger factor in choosing assignments. Not to mention you’d simply need more drivers or task people to provide the same speed of service in OKC as in San Francisco. Driving across Oklahoma City is an investment in time; I can’t imagine doing it pedaling – even with the benefits of my carbon road bike. The costs of travel become a bigger issue.
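The density gap is easy to quantify from the figures above (I’m assuming roughly 47 square miles for San Francisco, consistent with the “less than 50” mentioned):

```python
# Back-of-the-envelope density comparison from the figures above.

okc_pop, okc_area = 630_000, 620   # Oklahoma City: people, square miles
sf_pop, sf_area = 860_000, 47      # San Francisco: assumed ~47 sq mi

okc_density = okc_pop / okc_area   # ≈ 1,016 people per square mile
sf_density = sf_pop / sf_area      # ≈ 18,298 people per square mile

print(round(sf_density / okc_density))  # → 18: SF is roughly 18x denser
```

An eighteen-fold difference in density means an eighteen-fold difference in how far a driver travels, on average, between potential customers.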
With Uber, and now Bodega, sprawl presents another inherent issue. People are already used to driving their own conveyances, and the cities are designed for driving. With sprawl comes more parking, making the drive more second nature than messing with new, possibly awkward outcomes. Who knows when you’ll get that Lyft back from the store? If you just drive your own car, you can almost guarantee you’ll get everything you need and more – no machine-learning cycles needed to get the peanut butter you like stocked in the vending machine.
Customers only change habits when the benefit is significantly larger than the pain of learning new things. If you’re already driving everywhere and it’s not too bad, the cost may be higher to figure out an app and wait than to keep driving to the Wal*mart.
The second, and perhaps most interesting, issue is that as population shrinks, so does the possibility of anonymity – and one of the central pillars of these services is that the app connects people who don’t know each other. If the area isn’t large enough to provide sufficient anonymity for the service provider, the chance of customers sidestepping the app to contact providers directly becomes an increasing concern.
Flyover country is typically portrayed as more personable – maybe the riders would get to know the service providers. Think that’s crazy? I know people in Chicago who know and only use certain cabbies. They would call them personally for rides rather than calling dispatch. If it happens there, it will certainly happen with the likes of Uber or TaskRabbit in a smaller city where there aren’t hundreds of Lyft drivers. It’ll be a nice 100%-profit ride for the service provider, too, because they wouldn’t have to share with Lyft.
While I’m certainly not against the sharing economy – I lean on Uber quite a bit to be sure and would certainly love TaskRabbit to show up here in force – not all business models can be strapped onto every market.
Thinking more lucratively, perhaps another set of sharing business models needs to be developed for the great midsection of the USA (or the midsection of Germany, Russia, or China, for that matter) – one that takes into account the differences in resident behaviors, geography, and density. Or maybe this is just where we enter the Craigslist zone?
When these models do develop, I doubt they’ll come from the coastal startup hot spots of today. What I wouldn’t doubt is that the value of these models may actually outpace their city-based cousins. It might be easier to scale them up than to scale the current ones down.
I’ve been seeing a lot of consternation over the invasion of the robots in the US working world. It seems the biggest fear is that these robots will lay waste to the remainder of the American manufacturing workforce. It’s a scary prospect, to be sure.
To see how bad it would be, I thought I’d have a look at how the robot apocalypse played out in other countries. I looked specifically at Japan and Germany. Both countries went all in at the very beginning of industrial robotics – much more than the United States did. My thinking is that if automation is as apocalyptic as feared, there would be easily found effects in these countries. The best place to see this would be in a nation’s unemployment numbers. Luckily, the St. Louis Federal Reserve Bank keeps track of such things.
The above chart compares unemployment numbers for each of the selected countries for the period between 1970 and 1988. The period was chosen because it begins, arguably, before the robots: 1970 is regarded as the inception point for commercially available industrial robotics, and 1988 is chosen as the endpoint because it’s just before Germany had to deal with reunification – an issue that would skew the numbers for obvious reasons.
The graph clearly shows a surge in unemployment in 1975 and the early 1980s for two countries. Unfortunately, one was the U.S. and the other was Germany. This makes it difficult to prove the robots did it, as the U.S. – outside the auto industry – really didn’t see much robotics adoption. In fact, the fluctuation in the American numbers could easily be explained by the S&L crisis and perhaps by returning vets’ pressure on the labor market after the Vietnam War.
Looking at the GDP for all three countries is also pretty inconclusive at this altitude. All three roughly follow the same trajectory. This is intriguing in that it suggests the robots weren’t responsible for tremendous growth, either. Perhaps it could be posited that the machines were merely necessary for maintaining competitiveness in the market.
If it can’t be conclusively stated (obviously this is not an exhaustive investigation – it is a blog post, after all) that robots are the workforce’s enemy, and it also cannot easily be reasoned that they represent a tremendous economic advantage, what should we consider them?
I would rationalize them as the cost of doing business in the coming years, that is if the U.S. still wants to be in some sort of manufacturing business.
Borrowing this graph from Bloomberg (where I get the bulk of my news, and you should too), my point about the cost of doing business becomes a bit more evident – or at least worth considering. While the graph was built for a story on the massive growth of robotics adoption in China, an equally important takeaway is how far the U.S. has to go to catch up with the other manufacturing powerhouses. This lag puts the U.S. at a little more than half the number of machines per worker compared with the two countries in this post. Could this lag end up costing us what’s left of our competitive ability (or mere cost of doing business) in the manufacturing capability we currently have? Perhaps the real thing to fear is that it’s a shortage of robots, not a surplus, that will end up costing our jobs.
The store was never a price-competitive one. It never descended into the discount mud with Kroger or Safeway. Its draw was completely different. It is a destination store for a targeted audience.
If it’s losing shoppers, it’s not because of price (if price were the issue, it wouldn’t have had shoppers to begin with). The typical Whole Foods shopper isn’t that concerned about price, and certainly not concerned enough to switch because the local grocery chain is having a sale. The switching is probably happening because the novelty of the store has aged to the point where the excitement of going has been bested by the convenience of a closer conventional grocery store.
What are the unique aspects of Whole Foods that attracted people to gladly pay more than at other stores? They are many. For a start, there’s the pageantry of luxury, where shoppers can be seen affording artisanal local cheese and paying the organic tax. There’s the store as a very interesting restaurant, offering many items that are new, exotic, and perhaps even refined. And there’s the stock of local items like micro-brews and a far better wine selection than Yellow Tail or Fetzer (sorry, Yellow Tail and Fetzer, but you know what I mean).
Basically, Whole Foods is the grocery-store equivalent of buying a Tesla. The analysts’ rational, price-conscious shopper probably wouldn’t buy a Tesla; they’d buy a Corolla (sorry, Toyota, but you know what I mean) and drive it to Aldi.
So what is Whole Foods to do? It should stop listening to retail analysts and look at its own data. Look at how shoppers move through the store. Find the differentiators that aren’t denominated in dollars – the answers aren’t in the price column.
The real strategy is to once again create an aspirational destination for shoppers. Doubling down on the perception of exclusivity in the shopping event is key. Create reasons beyond the necessity of buying eggs and bacon to come to the store and, more importantly, to stay longer. Raise prices.
Think this is a preposterous idea? Well, let’s go back to Economics class and visit what’s called the Veblen good. Below is the graph for such a product.
Looks quite different from the regular supply-and-demand curve, right? When we leave the rational world and look at how real people behave, we get the Veblen curve, where something priced high enough is perceived as higher quality and is thus in greater demand.
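For the curious, here’s a toy sketch of the difference: an ordinary demand function falls as price rises, while a Veblen-style demand function rises with price over a range before eventually falling off. The functional forms here are invented purely for illustration, not fitted to any real market:

```python
# Toy illustration: ordinary demand falls with price, while a Veblen good's
# demand can rise with price over a range because a higher price signals
# higher quality or status. Functional forms are made up for illustration.

def ordinary_demand(price):
    # Standard downward-sloping demand.
    return max(0.0, 100 - 2 * price)

def veblen_demand(price):
    # Demand rises with price up to a saturation point (here, 40),
    # then the prestige effect runs out and demand falls like normal.
    if price <= 40:
        return 20 + 1.5 * price
    return max(0.0, 80 - 0.5 * (price - 40))

for p in (10, 20, 30, 50):
    print(p, ordinary_demand(p), veblen_demand(p))
```

The point of the sketch: over the prestige range, cutting the price of a Veblen good reduces demand rather than increasing it, which is exactly the trap the analysts’ advice could walk Whole Foods into.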
So while analysts may be right that lowering prices makes a company more competitive with the likes of Aldi and Safeway, they’d better reconsider what sort of shopper Whole Foods has, because those shoppers operate far to the north of the Veblen curve’s vertex.
Throughout the article, the author journeys through software development programs spanning an already exceptional career on a number of high-visibility projects. While I’m not going to do the post the injustice of paraphrasing it here, I’d like to highlight it for its indirect lessons in product management, the product life-cycle, and the strategic arguments I’m sure a lot of product managers have had – even ones outside the software development world.
The aspect I’m most intrigued by is how the post fleshes out the theories elucidated in The Innovator’s Dilemma. While I’m sure most are aware of the mechanisms in the book that lead to market leaders being usurped by upstarts, Crowley’s post floats a rather correlative trajectory in the notion that product complexity is one of the more potent causes for the increasingly slow movements of market leaders in the software industry. By deduction, the lack of complexity in products becomes the grease that slides new entrants past the established.
While The Innovator’s Dilemma points to an all-consuming capital and institutional investment in one particular technology or process that handcuffs the firm when it becomes time to pivot, Crowley seems to indicate that this sort of ‘handcuffing’ in the software world manifests itself in the scale and structure of the code base. Over time, it seems, these code bases become just as difficult to change as a production facility or a complex supply chain. And a lack of complexity is exactly what gives simple things the agility to make inroads against giants.
Please give it a read; it’s long but worth it. There are a lot of other gems to mine in the article as well. Personally, I find it quite satisfying to swap out the software-design specifics and substitute the verbiage of other industries. I’m sure it’ll be just as enlightening for the electronic-controller market, yogurt manufacturers, or other industries beyond software.