Curiously, VR is rearing its head at the same time big data and artificial intelligence technologies are washing ashore in businesses across the globe. As I’m sure it’s been hammered into everyone’s head, Big Data is the answer for everything (if you tend to believe the hyperbole) – unless AI is the answer. A common refrain across any number of Big Data articles is that the more nuanced results come from many causes combining in ways that were heretofore impossible to calculate, much less visualize, with current technology.
What does Big Data look like? Its more-or-less tangible manifestation is typically large databases with a number of interlocking tables that connect data in ways a piece of paper would have a hard time containing. It could also be large amounts of unstructured data, or perhaps real-time streams of the stuff. Any one of these aspects makes dumping data into a spreadsheet a difficult, possibly impossible prospect. It also means we’ve essentially started butting up against the limits of what two-dimensional spreadsheets can do without an excessive amount of programming behind the scenes.
The world isn’t as simple as what spreadsheets can display, either. Or, more to the point, perhaps we’ve already harvested the bulk of the easy correlations and causations. A great analogy is the bounty of insights found simply by moving data from paper records to the computer and gaining the ability to apply basic math to it. The simple wins sound like knowing, right now, what the balance sheet looks like. Or better, being able to show percentages of where a firm spends its money. Maybe even plotting product quality data to find unseen trends. That was cutting edge in the 60s, 70s, and 80s, but what was cutting edge yesterday is just not enough in the business of today and certainly not in the future.
Perhaps the next step in office applications is one where we not just view but operate on these data sets in their multidimensional world rather than working to transcribe them into dumber formats. The ability to enter that 4D space with VR gives us that opportunity.
Increasingly, we’ll see artificial intelligence seep into our workplaces as well. It won’t enter through the Hollywood portrayals; it’ll come in small ways. Smarter applications that solve the easier problems and eventually round up insights on the usual subjects. Those usual subjects are the same ones we spend a lot of time building complex spreadsheets for. Humans won’t have to do that anymore. We’ll need to focus on where there’s more ambiguity, sensitivity and creativity for as long as it takes our AI overlords to catch up.
All this means we’ll increasingly find ourselves operating on projects of growing complexity during our workdays. How better to do so than to bring the benefits of VR to the business world? I can only guess what these applications will look like, but I’m sure they will allow us greater ease in manipulating denser data – because that’s what the future looks like for the human worker.
Living in what is usually called ‘flyover country,’ we don’t get to see a lot of the more interesting ideas found under the banner of the ‘sharing economy.’ That could be explained away in a number of ways: things just take a while to get here (a good example is fashion, where some of the more unsavory trends also seem to take too long to leave), startups aren’t ready to expand into our market yet, or there’s just not a steaming cauldron of tech-savvy people in the area. But its lack of arrival also brings up musings about the limiting bounds of such services, namely density and anonymity.
Seeing as services like Uber, TaskRabbit and any number of other “I have free time, how about I use an app to make a few bucks” services usually originate in larger metropolises like New York, Seattle or the startup mecca of San Francisco, the birth locations seem obvious. There is a certain density of pre-existing potential customers in these cities, and probably a greater-than-average number of willing early adopters as well. I won’t speak to the rest of the world because I won’t pretend to understand its workings, but what happens when these sorts of services get translated to less dense, ‘more conservative’ areas of the US? I think this is when the seams of these services start to show.
The first thing that happens when you leave the high-density city world is that the pool of potential customers shrinks quickly. Customer density plays a large part in the economics of these services. At a certain point in this migration, the shrinking population will move the service providers of Lyft or similar services from the potential of full-time employment to part time or even less. This may be a much larger issue than the companies let on.
If these things can’t be done as a primary job, most service providers will need a full-time gig elsewhere. I’d think this will cause a dearth of operators, particularly during the 9-to-5 stretch of the week (when nearly all of us work the regular job or go to school) and at drive time, when the services may be needed most. Of course, this reduction is self-fulfilling: once there are fewer service providers, there is less utility for customers and, by extension, less opportunity for the service to be useful to providers, as there just isn’t enough demand to make the gig feel lucrative.
When talking about areas outside the largest US cities, population per unit area usually decreases as well. When the service extends to places where a city’s population is spread across a much larger area, the density of the service-provider workforce drops too. This tends to reduce the convenience of the service. At a certain point it hits the hurdle of being no more convenient than the alternative – like just doing it yourself.
A good example of how geographies differ across the country is comparing Oklahoma City to San Francisco. OKC has a population of around 630,000, which could be considered almost similar to San Francisco’s 860,000-ish – but the former stretches those people over 620 square miles while the latter consolidates its population into less than 50. With that amount of sprawl, workforce costs will increase, as transportation costs become a larger and larger factor in choosing assignments. Not to mention you’d just need more drivers or task people to provide the same speed of service in OKC as in San Francisco. Driving across Oklahoma City is an investment in time. I can’t imagine doing it pedaling – even with the benefits of my carbon road bike. The costs of travel become a bigger issue.
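To put rough numbers on that sprawl argument, here’s a back-of-the-envelope sketch (the population and land-area figures are the approximate ones cited above; San Francisco’s land area is taken as roughly 47 square miles):

```python
# Rough density comparison between Oklahoma City and San Francisco,
# using the approximate figures cited in the text.
def density(population: int, area_sq_mi: float) -> float:
    """People per square mile."""
    return population / area_sq_mi

okc = density(630_000, 620)  # roughly 1,000 people per square mile
sf = density(860_000, 47)    # roughly 18,000 people per square mile

# A driver in SF has about 18x the potential customers within reach of
# a given patch of ground compared to one in OKC.
print(f"OKC: {okc:,.0f}/sq mi, SF: {sf:,.0f}/sq mi, ratio: {sf / okc:.0f}x")
```

An order-of-magnitude-plus gap in customer density is the kind of difference that changes whether a gig can support a driver at all.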
With Uber and now Bodega, the sprawl presents another inherent issue. People are already used to driving their own conveyances, and the cities are designed for driving. With sprawl comes more places to park, making the drive more second nature than messing with a new, possibly awkward alternative. Who knows when you’ll get that Lyft back from the store? If you just drive your own car, you can almost guarantee you’ll get everything you need and more – no machine-learning cycles needed to get the peanut butter you like stocked in the vending machine.
Customers only change habits when the benefit is significantly larger than the pain of learning new things. If you’re already driving everywhere and it’s not too bad, the cost may be higher to figure out an app and wait than to keep driving to the Wal*mart.
The second, and perhaps most interesting, situation is that as population shrinks, the possibility for anonymity shrinks as well – and one of the central pillars of these services is that the sharing app is necessary for connecting people who don’t know each other. If the area isn’t large enough to sufficiently preserve the anonymity of the service provider, the chances of customers sidestepping the app to contact providers directly become an increasing concern.
Flyover country is typically portrayed as more personable – maybe the riders would get to know the service providers. Think that’s crazy? I know people in Chicago who know and only use certain cabbies. They call them personally for rides rather than calling dispatch. If it happens there, it will certainly happen with the likes of Uber or TaskRabbit in a smaller city where there aren’t hundreds of Lyft drivers. It’ll be a nice 100%-margin ride for the service provider, too, because they wouldn’t have to share with Lyft.
While I’m certainly not against the sharing economy – I lean on Uber quite a bit to be sure and would certainly love TaskRabbit to show up here in force – not all business models can be strapped onto every market.
Thinking more lucratively, perhaps another set of sharing business models needs to be developed for the great midsection of the USA (or the midsection of Germany, Russia or China, for that matter) – one that takes into account the differences in resident behaviors, geography and density. Or maybe this is just where we enter the Craigslist zone?
When these models do develop, I doubt they’ll come from the coastal startup hot spots of today. What I wouldn’t doubt is that the value of these models may actually outpace their city-based cousins. It might be easier to scale these up than to scale the current ones down.
I’ve been seeing a lot of consternation over the invasion of the robots in the US working world. It seems the biggest fear is that these robots will lay waste to the remainder of the American manufacturing workforce. It’s a scary prospect, to be sure.
To see how bad it would be, I thought I’d have a look at how the robot apocalypse played out in other countries. I looked specifically at Japan and Germany. Both countries went all in at the very beginning of industrial robotics – much more than the United States did. My thinking is that if automation is as apocalyptic as feared, there would be easily found effects in these countries. The best place to see this would be in a nation’s unemployment numbers. Luckily, the St. Louis Federal Reserve Bank keeps track of such things.
The above chart compares unemployment numbers for each of the selected countries between 1970 and 1988. The period begins arguably before the robots: 1970 is regarded as the inception point for commercially available industrial robotics, and 1988 is the endpoint because that’s just before Germany had to deal with reunification – an event that would skew the numbers for obvious reasons.
The graph easily points out a surge in unemployment in 1975 and the early 1980s for two countries. Unfortunately, one was the U.S. and the other was Germany. This makes it difficult to pin the blame on robots, as the U.S., outside the auto industry, really didn’t see a lot of robotics adoption. In fact, the fluctuation in the American numbers could easily be explained by the S&L crisis and perhaps returning veterans’ pressure on the labor market after the Vietnam War.
Looking at the GDP for all three countries is also pretty inconclusive at this altitude. All three roughly follow the same trajectory. This is intriguing in that it suggests the robots weren’t responsible for tremendous growth, either. Perhaps the machines were merely necessary to maintain competitiveness in the market.
If it can’t be conclusively stated (obviously this is not an exhaustive investigation – it is a blog post, after all) that robots are the workforce’s enemy, and it also can’t easily be reasoned that they represent a tremendous economic advantage, what should we consider them?
I would rationalize them as the cost of doing business in the coming years, that is if the U.S. still wants to be in some sort of manufacturing business.
Borrowing this graph from Bloomberg (where I get the bulk of my news, and you should too), my point about the cost of doing business becomes a bit more evident – or at least worth considering. While the graph was built for a story on the massive growth of robotics adoption in China, an equally important takeaway is how far the U.S. has to go to catch up with the other manufacturing powerhouses. The lag puts the U.S. at a little more than half the number of machines per worker of the two comparison countries in this post. Could this lag end up costing us what’s left of our competitive ability (or merely the cost of doing business) in the manufacturing capability we currently have? Perhaps the real thing to fear costing us our jobs is not enough robots.
Whole Foods was never a price-competitive store. It never descended into the discount mud with Kroger or Safeway. Its draw is completely different: it is a destination store for a targeted audience.
If it’s losing shoppers, it’s not because of price (if price were the issue, they’d have had no shoppers to begin with); it’s because the novelty of the store has worn a bit. The typical Whole Foods shopper isn’t that concerned about price, and certainly not concerned enough to switch because the local grocery chain is having a sale. The switching is probably happening because the novelty has aged to the point where the excitement of going has been bested by the convenience of a closer conventional grocery store.
What are the unique aspects of Whole Foods that attracted people to gladly pay more than other stores? Those aspects are many. For a start, it’s the pageantry of luxury where its shoppers can be seen affording to shop for artisanal local cheese and pay the organic tax. It’s because the store is also a very interesting restaurant offering many items that are new, exotic and perhaps even refined. It’s because the store carries local items like micro-brews and a far better wine selection than Yellow Tail or Fetzer (sorry Yellow Tail and Fetzer, but you know what I mean.)
Basically Whole Foods is the grocery store equivalent of buying a Tesla. The analysts’ rational, price conscious shopper probably wouldn’t buy a Tesla, they’d buy a Corolla (sorry Toyota, but you know what I mean) and drive it to Aldi.
So what is Whole Foods to do? It should stop listening to retail analysts and look at its own data. Look at how shoppers move through the store. Find the differentiators that aren’t denominated in dollars – the answers aren’t in the price column.
The real strategy is to once again create an aspirational destination for shoppers. Doubling down on the perception of exclusivity in the shopping experience is key. Create reasons beyond the necessity of buying eggs and bacon to come to the store and, more importantly, to stay longer. Raise prices.
Think this is a preposterous idea? Well, let’s go back to Economics class and visit what’s called the Veblen good. Below is the graph for such a product.
Looks quite different from the regular supply and demand curve, right? When we leave the rational world and look at how real people behave, we get the Veblen curve: when something is priced high enough, it’s perceived as higher quality and is thus in greater demand.
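As a toy sketch of that idea (the numbers, the threshold, and the slopes here are invented for illustration, not fitted to any real demand data), a Veblen-style demand function might look like:

```python
# Toy Veblen demand: below a prestige threshold the good behaves like a
# normal good (demand falls as price rises); above it, perceived quality
# makes demand rise with price for a stretch.
def demand(price: float, threshold: float = 50.0) -> float:
    if price <= threshold:
        return max(0.0, 100.0 - price)       # ordinary downward slope
    return 50.0 + 0.8 * (price - threshold)  # prestige region: upward slope

assert demand(40) > demand(45)  # normal good: cheaper sells more
assert demand(80) > demand(60)  # Veblen region: pricier sells more
```

The second assertion is the whole point: in the prestige region, cutting the price actually shrinks demand.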
So while the analysts may be right that lowering prices would make Whole Foods more competitive with the likes of Aldi and Safeway, they’d better reconsider what sort of shopper Whole Foods has, because those shoppers operate far to the north of the Veblen curve’s vertex.
Throughout the article, the author goes on a journey through software development programs spanning an already exceptional career on a number of high-visibility projects. While I’m not going to do the injustice of paraphrasing the post here, I’d like to highlight it for its indirect lessons in product management, the product life-cycle, and the strategic arguments I’m sure a lot of product managers have had – even ones outside the software development world.
The aspect I’m most intrigued by is how the post fleshes out the theories elucidated in The Innovator’s Dilemma. While I’m sure most are aware of the mechanisms in the book that lead to market leaders being usurped by upstarts, Crowley’s post floats a complementary notion: product complexity is one of the more potent causes of the increasingly slow movements of market leaders in the software industry. By deduction, a lack of complexity becomes the grease that slides new entrants past the established.
While The Innovator’s Dilemma points to an all-consuming capital and institutional investment in one particular technology or process that handcuffs the firm when it’s time to pivot, Crowley seems to indicate that this sort of ‘handcuffing’ in the software world manifests in the scale and structure of the code base. Over time, it seems, these code bases become just as difficult to change as a production facility or a complex supply chain. A lack of complexity is exactly what gives simple things the agility to make inroads against giants.
Please give it a read; it’s long but worth it. There are a lot of other gems to mine from the article as well. Personally, I find it quite satisfying to swap out the software-design specifics and substitute the verbiage of other industries. I’m sure it would be enlightening for the electronic controller market, yogurt manufacturers, or other industries beyond software.
The HBR article, How to Win with Automation, got me thinking a bit. While the title of the article had me expecting something different than what was presented in the writing itself, it does present a glimmer of what jobs would be like in the future.
The author, Greg Satell, may be onto something about the evolution of the worker in the face of all the developing technologies we see, like artificial intelligence, big data science and the internet of things (there, I said them – this is now a hip, cool business blog post, right?) The article boils down to Satell positing that the human worker going forward will work in the role of social interaction. Certainly a fair point, but I’d like to present an alternate path for the worker of the future.
It’s true that social interaction is important for business, especially at the consumer level, but increasingly these tasks are also being muscled in on by things like artificial intelligence. Watson can talk to you, and less erudite interactive bots can fill a lot of the customer service gaps left over. Couple this with the streamlining of business processes to prevent customer drop-off, and with putting solid UX thought behind interfaces for tasks like applying for loans or even arbitrating entertainment taste, and it’s easy to predict that the vast majority of what we see as social interactions today will be filled by robots in the not-too-distant future. Chances are they won’t be cute little Wall-E machines but instances on servers somewhere, doing such a good job we won’t even be aware they’re software.
What, then, does that leave us regular humans to do? The same things we did when machines first invaded during the industrial revolution: be ready to react to the events that machines cannot.
A good example of this is commercial pilots. Today’s pilots may perform take-offs and landings by hand, but level flight to the destination is being taken over by machines. The pilots program the flight path, line the plane up and launch it manually; then, once the plane is safely at cruising altitude, they engage the guidance system to take it the rest of the way. What does that leave pilots to do? A lot of things. The biggest task is to be ready to act in situations that deviate from the process so significantly that the machine intelligence cannot cope.
It’s much the same for managing industrial robotics. Sure, the machines can do the bulk of the work, leaving operators to seemingly twiddle their thumbs and wait for replacement, but the primary goal is to be there when things go wrong. To reach in and fix the feed errors when the robot stalls or hit the stop button when the tool bit breaks.
All programming is finite, and the march of technology is so fast that all conditions certainly cannot be accounted for. This gap is the niche humans will work in – a gap that will, of course, narrow day by day as the automation hones itself. For now, this is what people will be there for: to make decisions and take action after a significant number of sigmas have been crossed.
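To put a number on “a significant number of sigmas,” here’s a small sketch of how rare such deviations are under a simple normal model (the normal assumption is mine, purely for illustration):

```python
import math

# Probability of landing more than k standard deviations from the mean
# under a normal distribution -- the "gap" regime reserved for humans.
def two_sided_tail(k: float) -> float:
    """P(|X - mu| > k * sigma) for a normally distributed X."""
    return math.erfc(k / math.sqrt(2))

print(two_sided_tail(3))  # ~0.0027: a few events per thousand trials
print(two_sided_tail(6))  # ~2e-9: "never happens" -- until it does
```

The automation handles everything inside the bell; the human’s job is the thin tails where the model runs out of answers.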
I am intrigued by Brad Feld’s three machine concept of how a business should be run. Its simplicity is tantalizing, to be sure. The more I thought about it, the more I’d like to posit an alternative.
As I’ve understood it to be laid out, the concept has three components (not just a clever name): a Customer portion, a Product portion and a Company portion. While it’s easy to guess what the first two do, the third, the Company’s role, is to run everything else that goes into the function of a firm as well as to oversee the other two. I’m thinking that this layout is good for a snapshot in time of a business, but over time it could become a bit of a hazard to the continued growth of the company.
Call me a bit grizzled in terms of corporate structures, but after being inside a few of them, and working with or competing against others in various roles, a common setup you’ll see is this: firms have a product group – engineers, product designers, programmers, or some combination – and a sales group filled with the usual marketing and sales suspects. Much like Mr. Feld’s plan, someone runs each of those, with either a D, P, or C in the title. The common situation that develops is that these structures create ‘silos.’ Maybe it’s creepy land-grab politics or the trappings of too lean an organization, but silo leaders tend to focus only on their own areas and are not excited about reaching beyond them. Over time, I figure the Three Machines will yield similar silos that operate in their own interests, just like any other US corporation with a sales department and an engineering/product department.
What will this situation look like? On one hand, you’ll have sales devolving into customer-relationship/cost-model tactics; on the other, you’ll have the product group developing products they *think* the market will need – without much input from beyond the company’s walls. This is basically the corporate dystopia of The Innovator’s Dilemma.
What I think would be a better system is a Customer/Application group and a Market Research/Development group. They would essentially break down into the classic short term (~1 year) and long term (1-5+ years), respectively. Then, much like Mr. Feld’s model, you’d have a corporate group that determines the objectives and the intensity each would receive over time.
The goal of the split, in the short term, is to have a group that can focus on your current customers’ needs and desires using your current products (the Customer/Application group). That focus manifests as tweaks to the current product, pricing, or marketing driven by customer feedback and quarterly market forces.
The Market Research/Development group would then be a product development partnership driven by longer-term movements in the market. The group develops new products for where the market is heading, so they’re ready when current products move from “Stars” to “Cash Cows.” This ensures the company both pays attention to its current customers and positions itself with new products for the future.
The Company group works to ensure the firm is appropriately focused on the near term or the long term as necessary, and that it remains financially healthy in the meantime. Its other job is to choose when to introduce new products and de-emphasize older ones. Because the Company group is (and remains) distinct from the other two, it should have a more impartial view that leads to better strategic decision making.
Taking Mr. Feld’s lead, I think I’ll revisit this a bit. Maybe I’ll make some MacPaint graphs to show off my design skills. And finally, if you, Mr. Feld, do read this, just know that I’m riffing on your thoughts…and I obviously read your posts as a fan!
Hyundai has recently launched its new Genesis line of cars. They are to be vehicles of a quality far beyond what you’d expect from the conventional Sonata and more in line with the higher-end European imports. If the brand name sounds familiar, that’s because Hyundai has been trying, and failing, to sell a luxury car called the Hyundai Genesis for some time now.
Why has it been such a struggle under the Hyundai name? It’s a question of how far you can stretch a brand before its limits are reached. Since its arrival in the US, the Hyundai automobile has developed a reputation as a reliable, cost-effective product. Increasingly, it’s been known to be of perhaps higher quality than equally priced vehicles in its markets. I’d assume the Sonata is meant to go up against the Ford Fusion, but it could easily compete with the Fusion’s larger, more robust sibling, the Taurus. That’s how you create value, and Hyundai is a master at it. The problem is that the luxury automobile market is not connected by a gradient of quality to the standard car offerings. It’s a tribe unto itself. Just as a company like Cadillac has trouble reaching down to the middle market, a brand like Hyundai has just as difficult a time reaching up.
A luxury car owner wants to be seen as set apart from the commoners, and that’s part of the reason for the difference in price. Quality is almost secondary to the inherent billboard stating that the owner of a BMW or a Lincoln has the financial means to purchase one. The Genesis may always have had Lincoln-like qualities, but the name doesn’t confer the same economic pedestal the latter’s badge does. In short, no Cadillac buyer is going to pass up a Cadillac showroom to mingle with the masses at a Hyundai dealership. Hyundai just could not get the right shopper to look at the car under the existing brand.
After maybe a decade of trying, Hyundai finally succumbed to building a new brand – one that could be distinct and positioned in the right space with the target customer. This is no surprise, as Honda (Acura), Toyota (Lexus) and Nissan (Infiniti) have all done the same thing. Contrast that with Volkswagen, which never figured it out with the Phaeton.
Someone I met recently received a call from an Apple recruiter. My friend is quite well known as a high performer in their field (to protect the innocent, I’ll keep things comfortably vague) and is certainly no stranger to calls from unsolicited recruiters. On this occasion, it was for a position that would have high impact on the firm and, as a matter of course, would require working at the new mothership; relocation would be included. Compensation came up early, as the costs of the San Francisco area were a concern for someone not quite sure of the city lifestyle. My friend also made clear that any move wouldn’t be a horizontal one – in title or in wage. Apple said they would get back to my friend, though not without implying that Apple was a ‘Big Deal.’ Nothing happened for a while, and then my friend found out that another, less experienced person got the same call – albeit about a week after my friend’s.
Why am I doing my best to ‘turn a phrase’ on a story like this? Easy: because it’s emblematic of what we can expect from the new Apple going forward. That expectation isn’t a steadfast resolve to keep the company head and shoulders above the competition in every aspect; it’s something else. Something inherently more average than what we’ve seen in the past.
Obviously there must be more to this than a single, highly qualitative tale of a missed employment opportunity. The point of the story is to call out the wholesale difference in how the firm operates now versus when Steve was at the helm. In short, we’re watching Apple turn into a firm that no longer leads its markets but strives merely to be cost-competitive – the difference between a manager with vision and one with a financial calculator.
So that’s a bold statement, to be sure, but let’s look at a few details that give the idea some credence – or at least a pause for consideration. To illustrate, I’ll compare activities now to the perceived methodology during the Steve era.
My first example is the iWatch… or, rather, the Watch. There’s no doubt the company was working on this project well before Steve died, yet they didn’t release one even as an army of Android competitors started belching out versions. After Steve passed, Apple put out the Watch – and did it ham-fistedly: they released sketches and pictures of early concepts, slipped specifications, and even seemed to have a false-start launch.
When the Watch did come out, Apple followed the Steve playbook, building all the presentations and materials in a manner that aped the Steve way – except for one thing: the company had no killer app, function, or even emotion to attach to it. It was merely a me-too product plugging a product-line gap. I’ll bet Steve knew that, and that’s why he never released it. The difference between Jobs and Cook may be that Steve knew when not to listen to analysts and when to say “no” to investors for the good of the company. The Watch has still never reached the penetration expected of it.
Another example is the firm’s seeming inability to further innovate on the iPhone platform. The user interface hasn’t been updated to any noticeable degree, aside from icon styling, in forever. It took a near customer revolt to get a phablet-style iPhone out, and perhaps the two most damning aspects come from the iPhone’s case. Number one was the bending debacle: while I initially thought it was hyperbole, I recently witnessed it actually happen to a first-generation phone in a wholly unchallenging circumstance. The second is the warning that comes with the new black plastic-cased iPhone 7.
Why are these damning? Because they could easily be blatant attempts at cutting costs at the expense of product quality. The bending could certainly be an engineering oversight, but it could just as easily indicate the firm isn’t devoting enough resources to design, or is actively looking to take cost out – perhaps by reducing machining time or using cheaper materials.
Speaking of cheaper materials, the new phone comes in a plastic-cased version. Plastic is obviously far cheaper to produce than aluminum, especially when you’re using a material that noticeably takes on wear marks. Ask yourself: would this move have happened under Steve’s dedication to customer experience?
Speaking of innovation, one can’t overlook the buying spree the firm has been on since Steve left. You can read plenty of business books and articles that point to acquisition as a large company’s best bet to stay relevant in a changing market – in fact, HP has a venture capital arm, and so does Microsoft. But the scatter-shot buying pattern is concerning, as it could easily come off as grasping at straws. If Apple is buying so many companies, does that mean the once-great innovator is idea-dead on the inside?
While sales of the products are still ridiculously high, the year-over-year numbers have seen pronounced, double-digit drops. Both drops are laid at the feet of the two product lines whose markets have seen the greatest innovation from competitors rather than from Apple: the iPhone and the Mac.
Now the company is talking about repatriating a large sum of capital. On the surface that may seem normal, but for a firm that put up such a stink about doing so previously, it now seems okay with swallowing the huge tax penalty. Apple must really need that cash if it’s willing to part with over a third of the value.
So when you look back at the story that started this post off, you can see that the desire for margin (out of the bottom end of the balance sheet) may manifest not in selecting the best person for the job at any cost, as Steve used to do, but in focusing on cost before performance in the short term. That doesn’t sound like the company that throws the hammer anymore, but the one that’s receiving it.
There’s been a lot of sentiment pointing to an upcoming recession. The pundits count the rather sharp downticks in a number of markets as proof, and there’s a lot that can be pointed to as the impetus for these movements. China’s over-production is one; the under-performance of the Chinese market could be another. There’s the glut of oil on the market that’s either helping or hurting. There’s too much easy credit to be had in the US, or too little easy credit available in the US. The Middle East is blowing up, and then there’s always whatever is going on (or not going on) in Europe. Playing games of permutations and combinations with these can get you any culprit you’d like.
My thinking is that all of these recession worries stem not from whatever immediate geo/financial/political issue we’re seeing currently, but simply from how one uses the term “…pre-recession levels”.
For example, if you use the term as meaning “…pre-2007 levels” your graph of the global price of industrial metals from the St. Louis Fed looks like this:
But if you were concerned instead with not just the last 10 years but the last 20 years, your graph would look like this:
Expanding the view from ten years to twenty or more makes a big difference in how things look, huh?
What’s the problem with using the term to mean 2005-ish on? The problem is that for nearly everyone using it, the graph looks like a typical recession/correction graph and gives perhaps false hope that the bubble of 2003-7 could be attainable again, and quickly at that. Have a look at many of the easily available indicators, including this minerals index, and they show a similar, rather swift uptick starting around 2003 to a level at least twice what had held over the prior decade. I’m no economist, but I’m thinking the speed at which that jump took place and the distance it covered indicate that the period between 2005 and 2007 – and by extension, the first part of 2015 – was an unsustainable high, certainly not a new normal.
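The baseline-window effect above can be sketched in a few lines of code. The numbers below are purely illustrative stand-ins for an industrial metals index, not actual FRED data; the point is only that averaging over a post-2005 window versus a twenty-year window yields very different pictures of what “normal” is.

```python
# Illustrative sketch: how the choice of baseline window changes whether
# today's level looks like a dip to recover from or a return to an old band.
# The yearly values below are made up for demonstration purposes.
years = list(range(1995, 2016))
index = [100, 102, 98, 101, 99, 103, 100, 102,    # 1995-2002: flat "old normal" band
         130, 170, 210, 240, 250,                 # 2003-2007: the sharp run-up
         180, 160, 200, 220, 210, 190, 170, 150]  # 2008-2015: partial fall-off

def mean_since(start_year):
    """Average index level from start_year through the end of the series."""
    vals = [v for y, v in zip(years, index) if y >= start_year]
    return sum(vals) / len(vals)

ten_year_baseline = mean_since(2005)     # the "pre-recession levels" view
twenty_year_baseline = mean_since(1995)  # the longer view

print(f"10-year baseline:  {ten_year_baseline:.0f}")
print(f"20-year baseline:  {twenty_year_baseline:.0f}")
# Against the short window, 2015's level reads as a slump below normal;
# against the long window, it still sits well above the pre-2003 band.
```

The short window treats the 2003-7 run-up as the whole universe, so anything below it looks like a recession; the long window exposes it as an outlier above a decades-old band.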
If this is true (and only time will tell), then what we’re probably going to see is a return to the relative band of production and pricing seen prior to 2003. That would make this drop not a recession, but a correction back to the band things were moving in before 2003. Once the excess is driven out of the markets, we wouldn’t arrive at a ‘new normal’ but fall back to the old normal we’d been in all the way back to the disco era.
The culprits for this jump are probably China coming online as an industrial power and ridiculous credit availability in the US. The first could be argued to be a sort of false demand, as it was created mostly by the Chinese government synthetically stimulating the market and, more importantly, convincing the rest of the world that this stimulation would go on forever (a new normal). China doesn’t need to keep building that part of its economy, and so we have reached the end of “forever”.
If things must return to pre-2003 levels, the world will have to right-size, not just wait out a recession.