[Poem written by ChatGPT (guess the prompt)]
The sentiment around AI shifts from “AI is going to dominate us” to “it’s a ubiquitous part of company building” and back to “AI is going to dominate us” each time a mind-blowing new feature is launched, like Sora. Our collective emotionality around AI seems to oscillate quite dramatically between the sensational and the ubiquitous.
Back when ChatGPT first launched, the hype was insatiable. Then, in real time, many began to question the hype, yet it persisted. Next, mere months later, AI became a base assumption in company building. Incumbents and startups alike launched AI-enabled feature after feature, delighting customers in real time. We even forgot about it for a while, moving on to AR and the Vision Pro, but now, with the launch of Sora’s astonishing video-generation capabilities, the discourse is back on again, full force.
This pattern is staggering because it feels like the Gartner hype cycle repeating in quick succession: exuberance, skepticism, and productivity, followed by exuberance again...
These patterns suggest that the fundamental assumptions about technology adoption are changing – we are building new technology faster than ever before, and we are also accepting it into our lives faster than ever before.
ChatGPT reached mass adoption 60X faster than Facebook and 30X faster than Spotify, which tells us that the rate of change in technology, the lifeblood of VC, has grown by orders of magnitude. This makes our job as VCs a lot harder, but also a lot more fun.
Meanwhile, language models are commoditizing faster than we’ve ever seen foundational infrastructure commoditize. This shifts the focus from investing in great technology as such to investing in businesses powered by great technology.
We also expect that vertical AI will help software finally penetrate traditionally tech-laggard markets such as healthcare, manufacturing, government, and service industries. So we are excited to dig into sectors where data is being created and stored digitally for the first time ever. (https://northzone.com/2023/10/12/perspectives-vertical-ai-todays-data-whisperer-tomorrows-office-linchpin/)
This wave of disruption will look more like consumer tech adoption, even if the businesses are B2B. Many industries will now be able to use product-led (B2C2B) sales motions because AI-powered products will be more intuitive than ever. (https://northzone.com/2023/12/15/perspectives-freeing-smbs-from-franchises-through-ai/)
This new wave of technology adoption will raise ethical questions we have never encountered before, and we will have to steer ourselves in real time to make sure we are making the technology work for us rather than the other way around.
What’s clear to us is that AI is already a core enabling technology for almost every company we are evaluating, similar to the cloud. Since the innovation cycles are much more accelerated, it is hard for incumbents, startups, and investors alike to fully know where the puck is going. Everyone is hyper-vigilant about what is being built, buzzing about it in real time, and the excitement and anticipation are real. In this frenzy, it’s more important than ever to bring our attention back to some of the bigger questions around how we steer this technology rather than letting it steer us.
So far, the growth of technology has been camouflaged in ideals like connectivity, democratization, productivity, and creativity, with major tech companies touting these as their values (Mission Statements of Top Tech Companies).
However, in most of these cases, the growth hasn’t been driven by the ideal; it’s been for the sake of growth itself, driven by our unconscious needs and desires. Technologies, like organisms, evolve by survival of the fittest, perpetuated by our willingness to pay with money and attention, which the technology then monetizes. The more it appeals to our needs, the more it grows. For example, we have been paying with our preference data, delivered through swipes and likes, for Instagram Reels that give us quick hits of dopamine or ego strokes in the form of views and followers. That this exchange happens isn’t the scary part; the scary part is that it often goes unexamined, and the hidden costs are often greater than we anticipated. Some very sad but very real examples include the dramatic decline in teen mental health, the rise in screen and porn addictions, and the polarization and dysfunction of political systems, amongst others.
This isn’t to say technology hasn’t done a lot of good for our society, just that we don’t have the tools to account for the societal costs incurred, and therefore we don’t have any control over that part of the relationship.
We can also argue that technology only magnifies the preferences of society, empowering us to get more of what we want; hence the never-ending debate over what role tech giants should play in what their platforms enable. It’s the same underlying thread in the debate around AI governance that has played out in the media over the past few months. Zooming out a bit, there is no shortage of narratives about how technology and humans have co-evolved over the past 100 years, starting from the industrial era, with countless studies on its effects on society, families, gender roles, quality of life, mental health, and more. I won’t try to resolve the topic in this post because the relationship between technology and humans is far too complex to pin the blame on one side or the other.
However, a problem in any relationship is usually the result of a dynamic rather than purely one-sided, and therefore the solution also has to come from both sides. And whereas technology has neither conscience nor values, we do.
Of the players who typically define the values governing our society, none wants to, or can, step up to the plate. Government regulators can’t get their skates on fast enough; big tech companies have no incentive to act; and religions haven’t evolved with technology in mind.
Therefore, none of our existing value systems addresses how we interact with technology, so every family is left to come up with its own solution.
This is a scary scene for modern parents, myself included, who have to answer questions like these on their own:
What are some of the values around how we interact with technology?
Should we have screen-time limits? What about AR?
Should there be stronger age limits? What about for different formats?
Should the internet have photos of our children’s faces? Should our data be private and self-sovereign?
Should we have stronger emotional boundaries with technology? Can we “trust” AI? Should we bond with it?
Should we adhere to a list of pro-human values like natural > digital, creative > productive, empathy > power, decentralization > centralization, human > algo?
How do these values fit with our existing systems?
Is chatting with a relationship bot cheating? Are those real love relationships?
Is it appropriate to use ChatGPT for homework? What should we be learning, and what should we be outsourcing?
How do we think about spending on digital goods? Should kids be allowed to spend all of their allowance on Fortnite?
Who develops these values and maintains them? How frequently?
A distributed protocol controlled by humans?
An incorruptible algorithm that updates itself?
A board?
As I have yet another serious discussion with my 9-year-old about our “overly strict” screen-time limits vs. his friends’, I am also realizing how difficult it is to set and maintain these standards alone. Even as a parent privileged enough to be well-versed in this world, I still find it extremely difficult to think through and act on these decisions on a daily basis. It is so tempting to default to the lowest common denominator. However, a parent’s will to help their child succeed in this new digital world is probably the strongest driver for defining some of these rules. I am optimistic that this is where it begins.