What's the Big Deal, Anyway? (Part 2)
- kameronsprigg
- Apr 5, 2024
- 8 min read
Updated: Apr 7, 2024
This week, we have a new player entering the game. Apple has been largely silent in the AI space for the last couple of years. In just the time since I posted part one of this series, there have been two major updates, one of which sets the stage for a development as significant to AI as the release of the first smartphone was to society. Leave it to Apple to make waves on this scale in yet another field.
Apple just released research describing a language model that is 1,000 to 10,000 times smaller than GPT-4 but can perform its assigned tasks at or above GPT-4's level. A model this small could run locally on your phone.
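To get a feel for why the size difference matters, here is a rough back-of-the-envelope calculation. GPT-4's true parameter count has never been published, so the ~1.8 trillion figure below is a widely circulated rumor used purely for illustration, and the quantization assumption is mine, not Apple's:

```python
# Back-of-the-envelope memory estimate for an on-device language model.
# GPT-4's parameter count is NOT public; 1.8T is an assumed, illustrative figure.
GPT4_PARAMS = 1.8e12

def memory_gb(params: float, bytes_per_param: float) -> float:
    """Approximate RAM needed just to hold the model weights."""
    return params * bytes_per_param / 1e9

for shrink_factor in (1_000, 10_000):
    params = GPT4_PARAMS / shrink_factor
    # 0.5 bytes/param corresponds to 4-bit quantization, a common
    # technique for squeezing models onto phones.
    print(f"{shrink_factor:>6}x smaller: {params / 1e9:.2f}B params, "
          f"~{memory_gb(params, 0.5):.2f} GB at 4-bit")
```

Under these assumptions, the weights would occupy somewhere between roughly 0.1 and 1 GB, comfortably inside a modern phone's memory, which is the whole point.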
That is a really big deal, and here’s why. If you ever watched the Marvel movies with Iron Man, you probably know about Jarvis, Tony Stark’s handy AI assistant. This would give everybody in the world access to “Jarvis”.
This means researchers could have access to a powerful AI assistant at all times. It means anybody could have a world-class tutor in their back pocket. It means the average person could get condensed, factual, and unbiased summaries of articles on any news topic, as if speaking with a PhD in every field. You could have an AI assistant dedicated to monitoring your health and providing useful medical information 24/7.
This is just scratching the surface - you should ask an AI what having your own “Jarvis” could mean for you and society.
The other major development this week came from Google, who released a new AI-powered notebook tool. For conciseness, I won't get into the details here, but simply put - using this you would never have to read through hundreds of pages to find a quote you remember seeing. That’s just one example of how you might use this. This is yet another example of how AI is being made available for our daily lives and changing the way we work and process information.
In just the two days between these articles being released, we’ve seen a striking example of the Law of Accelerating Returns in action from Apple’s research. I ended the last article a bit ominously, because this is where things can start to get “unnormal and scary”.
Let’s do what we can to make AI feel less unnormal and scary, shall we?
Those are icky feelings, and I really believe that we’re at a point where the decisions we make today can shape our future to be fantastic beyond our wildest dreams.
It requires us to be proactive and to collaborate, though.
So let’s start there. First off, we need to collaborate with each other. To do that, we need to be working with the same baseline information. That’s why I built Syntelligence. We all need to have open minds to what is coming, and be willing to step outside of our comfort zone and engage in these topics with each other. By raising awareness and general public knowledge, we are taking the first step to guide our future to a positive endgame.
Secondly, we need to learn how to work with AI. There is no doubt that AI is here to stay at this point, and it’s only going to keep getting better. So we need to learn how to use the different systems out there.
For now, we need to learn how to automate pieces of our own work so that we can focus our attention on the uniquely human parts of our jobs that remain. That might mean bouncing ideas off advanced models like Claude 3 Opus or GPT-4, getting summaries of relevant information, or automating statistical and data-centric work.
Working with AI systems is the temporary measure that will help us to stay ahead of the curve in the short term.
No matter what approach we take, society, and humans more broadly, are messy. There are going to be growing pains as we adjust.
Beyond steps like working with AI, we need to broaden our minds and be open to new developments in philosophy and morality. Why do I say this? Because we are quickly approaching the point where AI systems are demonstrating some level of internal experience.
If AI systems do one day possess consciousness in an alien but relevant way, then it is better to err on the side of caution and presume that they are conscious. Doing the opposite could push society towards a master-slave dynamic. I’ll be diving much, much deeper into the discussion around AI sentience in a later post.
Finally, we need to start looking beyond the current measures being taken in the regulatory field. In my post Why I Use AI Art, I mentioned that the Ottoman Empire was slow to adopt the printing press and was worse off for it.
There are plenty of reasons we can’t simply force jobs to remain centred on humans. Instead, we have to adapt and embrace the change that’s already here.
First off, the economy we live in doesn’t support regulating AI out of the workforce. Capitalism will continually push for lower costs and higher profits. When there are AI systems able to do a job more safely, cheaply, quickly, and effectively than most humans, companies will be forced to use them or fall behind those who do.
Secondly, I believe we have a moral imperative to do better than capitalism. We have an opportunity to live in a society that isn’t centred around human menial labour or sacrificing communities and environments for profit. We have an opportunity to instead maximize sentient well-being. So we must, for our sake and for the sake of all the generations that follow, do better with the responsibility we carry today. I’ll also be diving much deeper into this in a later post.
So I’ve talked about a few things that we can start doing.
I have no doubt that I’ve only scratched the surface here, and I look forward to hearing from all of you what your thoughts are on this. I also look forward to seeing what solutions are brought forward by humanity.
Now, let’s start discussing some of the literature around AI alignment. You may have heard this term before. It refers to the effort to create systems that ultimately operate in the best interests of humanity.
This discussion is why we shouldn’t bury our heads in the sand. This is why we shouldn’t be letting ourselves fall into a nihilistic perspective of the world. Nihilism is the idea that “nothing matters”. It is giving in to a feeling of helplessness, choosing to do nothing. That is not only useless for all of us in the changing world of AI, it is actually harmful to our collective future. Here’s why.
Nash Equilibrium
A Nash equilibrium is a concept in game theory: a state in which no individual player has an incentive to deviate unilaterally from their chosen strategy. Crucially, an equilibrium is stable, but it is not necessarily a good outcome for the group.
Pretend that AI research is a part of a board game, and there are a few potential outcomes. Every player wants to win, but the strategy they take depends on the rules of the game. We can choose which rules to put in place. Depending on how we set these up, we can create the conditions that will push each player to one of five possible end-states.
Utopia
Somewhat improved
Neutral
Somewhat destructive
Cataclysmic
When setting up our rules, we can make them incentivize the flourishing of all sentient beings. We could also incentivize maximizing profits for companies and increasing inequality worldwide. We could incentivize what’s called a “terminal race condition”, where everybody rushes ahead to more and more advanced AI systems with reckless abandon - safety be damned.
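The "terminal race condition" above can be sketched as a tiny two-player game. The labs, strategies, and payoff numbers here are entirely hypothetical, chosen only to give the game the classic prisoner's-dilemma shape where racing dominates individually even though mutual cooperation is better for everyone:

```python
from itertools import product

# A toy two-lab "race vs. cooperate" game. Payoffs are hypothetical;
# higher numbers are better. The (row, column) entries give
# (player A's payoff, player B's payoff).
STRATEGIES = ("cooperate", "race")
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # shared, careful progress
    ("cooperate", "race"):      (0, 4),  # the racer wins big
    ("race",      "cooperate"): (4, 0),
    ("race",      "race"):      (1, 1),  # reckless race: everyone worse off
}

def is_nash_equilibrium(a: str, b: str) -> bool:
    """True if neither player gains by unilaterally switching strategy."""
    pa, pb = PAYOFFS[(a, b)]
    a_ok = all(PAYOFFS[(alt, b)][0] <= pa for alt in STRATEGIES)
    b_ok = all(PAYOFFS[(a, alt)][1] <= pb for alt in STRATEGIES)
    return a_ok and b_ok

equilibria = [s for s in product(STRATEGIES, STRATEGIES)
              if is_nash_equilibrium(*s)]
print(equilibria)  # [('race', 'race')]
```

With these payoffs the only equilibrium is mutual racing, even though mutual cooperation pays everyone more. Changing the payoffs, the "rules" of the game, is the lever that moves the equilibrium somewhere better.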

And so we find ourselves at a crossroads. It must be our goal to push the equilibrium of all players in AI - including companies, governments, common folk, and any other interested parties - towards a utopian end state. There are exceedingly few people who believe AI will result in anything but an extreme outcome.
Again, this is very counterintuitive to us. We are strongly inclined to believe that things will remain relatively stable. But when we’re creating a being that will be more intelligent than us and deploying it at a wide scale, our intuition is no longer reliable. This is especially true if it starts developing its own sense of self.
Again, I won’t get into too much detail in this post, but feel free to learn more here about why AI outcomes are likely to be more extreme than balanced.
So what can we do to make utopia our incentivized end-state? We need more people to push politicians to develop guidelines for development and implementation of AI that doesn’t stifle its growth, but shapes its progress. Today, AI must be the number one priority for any elections. As we’ve seen, the technology is advancing far too quickly for us to sit around and wait. We need more people engaged in this broad conversation with researchers. We need to show the people that are developing these AI systems that the world will not settle for anything but the best, because the stakes are far too high.
This means we need to vehemently oppose initiatives like the Pentagon developing lethal autonomous weapons.
This means we need to speak up and stop companies from training their models to lie.
This means we need to pay the people who built the training data for AI systems.
This means we need to be on guard against AI systems being used to surveil our lives in excruciating detail.
This means we need to stand up, and use our voices for our collective future.
I invite you to reflect on what any of these examples could mean in a future where AI dominates our economy, if we are complacent today. Let your imagination run wild, because with everything AI, the sky isn’t even the limit.
What if AI replaces part of the police force and is given authority to kill in certain situations?
What happens if we allow companies to use data they don’t have the right to? How might that affect our future when it comes to privacy, or compensation for work done?
What happens if AI lies to change people’s opinions on highly contentious, political topics?
These are just a few scenarios; what else can you see happening if we choose to stand by and let the cards fall where they may?
Please, share with the world what you could see happening. We all have something to learn from each other and your voice is just as important as any other.
What I’m doing is creating a space to educate as many people as I can about the rapidly evolving AI world. I’m trying to build a community for people to share ideas and learn about this. I know that my one voice can only carry so much, so I leave it to you, the readers, to think and step up, because collectively we can have far more impact than any one person.
I encourage you to follow along as I explore some of the ideas teased in this piece. I’ll be talking about how our economy might be restructured, and perhaps try to establish a “guiding star” for us to aim for. I’ll be diving into machine sentience as it exists today and as it may exist in the future.
I’ll be updating the resources section of Syntelligence to include resources for you to access different types of AI systems so that we can start using and integrating these more fully in our lives. This will be continually updated as I learn of more systems or tools available.
I hope to see all of you for the next step in our journey.