GPT-3 and "The New Fake Intelligence"
Plus: The Democratization of Simulation, Transnational Governments, and the Very First IRL Yak Meet-Up
In This Week’s Yak Talk:
The Democratization of Simulation
What Exactly Will Transnational Governments Look Like?
GPT-3 and “The New Fake Intelligence”
Plus: Yak Writings published this week.
The Democratization Of Simulation
By Matthew Sweet and Joseph Ensminger
From time immemorial, humans have conducted thought experiments concerning social systems. Some are lofty: a philosopher asking, “How would a society of benevolent humans that attempts to maximise collective wellbeing unfold?” Some are less so: a teen wondering, “How are my friends gonna react when they find out I crashed my truck?”
Until the mid 20th century, many of these experiments were confined to the realm of thought. But then simulation modelling was born. Suddenly, it was conceivable for digital prototypes to be stress-tested instead of physical products. Engineering became easier, cheaper, faster and less risky as a result.
Something else was realised: we could simulate the behaviour of complex adaptive systems. Agents within such a model could be assigned simple heuristics or decision-making rules, then allowed to interact with other agents and with the sandbox of the model itself. Many of these simulations could then be run in parallel, allowing us to generate a distribution of outcomes, including rare tail events, that no single experiment could reveal.
Unfortunately, it wasn’t until the late 20th century that computational power began to catch up to such human ambition. When it did, the ability to model the behaviour of complex adaptive systems became a reality.
The ability to simulate thought experiments via agent-based modelling (and its cousin, multi-agent system modelling) opened a lot of doors. It allowed us to simulate the spread of epidemics, assess the impact of environmental contaminants in bodies of water, and chart the effects of rewilding schemes. It allowed us to model financial markets and explore possible consumer behaviours across diverse scenarios. It could also allow us to preempt unwittingly chaotic human behaviour...
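To make the epidemic example concrete, here is a minimal sketch of agent-based modelling, assuming a toy SIR-style model with made-up, uncalibrated parameters. Each agent follows one simple rule; running many seeds yields the distribution of outcomes described above:

```python
import random

# A minimal agent-based SIR epidemic sketch. All parameters are
# illustrative, not calibrated to any real disease.

def run_sir(n_agents=200, contacts_per_step=4, p_infect=0.05,
            p_recover=0.1, steps=100, seed=0):
    rng = random.Random(seed)
    # Simple per-agent rules: susceptible ("S") agents may catch the
    # infection from infected ("I") contacts; infected agents may
    # recover ("R").
    states = ["S"] * n_agents
    states[0] = "I"  # one initially infected agent
    for _ in range(steps):
        new_states = list(states)
        for i, s in enumerate(states):
            if s == "S":
                contacts = rng.sample(range(n_agents), contacts_per_step)
                if any(states[c] == "I" for c in contacts) and rng.random() < p_infect:
                    new_states[i] = "I"
            elif s == "I" and rng.random() < p_recover:
                new_states[i] = "R"
        states = new_states
    return {s: states.count(s) for s in "SIR"}

# Running many simulations with different seeds produces a distribution
# of outcomes rather than a single prediction.
outcomes = [run_sir(seed=k) for k in range(20)]
```

The point is less the epidemiology than the workflow: cheap parallel runs turn a thought experiment into an outcome distribution.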
ABM for MMOs
Software as an artefact is complicated, but the environments it is deployed and expected to function in are complex. One of the most complex is massively multiplayer online role-playing games (MMORPGs) like World of Warcraft (WoW). Some numbers on WoW:
Estimated map size: 60+ square miles
Estimated items: 100,000+
Estimated NPCs: 10,000+
Estimated PCs per server: 10-10,000+
Estimated quests: 29,000+
All these facets interact. This can be problematic as one of the dev team’s responsibilities is to predict and prevent glitches, bugs and errors. One way to do this is to have a large-scale beta-model release involving tens of thousands of players. However, even this reveals only a small percentage of all possible interactions.
But with ever-falling computation costs and a growing ability to deploy artificially-intelligent models, it is conceivable that ABM presents itself as a viable alternative to, if not a total replacement for, conventional beta testing. Similar to how stress-testing a digital prototype of a bridge is easier than building a real one for testing purposes, simulating 10,000 agents’ activity in a game’s world in order to find errors may be easier than co-ordinating 10,000 real players for beta-testing. And it may be more effective.
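As an illustrative sketch (the game rules and invariant below are invented for demonstration, not drawn from any real MMO toolchain), agent-based testing can be as simple as letting simulated players take random actions while a harness checks a game invariant after every step:

```python
import random

# Toy agent-based "beta testing": simulated players take random actions
# in a tiny game world, and the harness checks an invariant after every
# step, the way a dev team might hunt for item-duplication bugs.

def fuzz_game(n_agents=50, n_items=30, steps=500, seed=0):
    rng = random.Random(seed)
    world_items = set(range(n_items))            # items lying in the world
    inventories = {a: set() for a in range(n_agents)}
    violations = []
    for _ in range(steps):
        a = rng.randrange(n_agents)
        action = rng.choice(["pickup", "drop", "trade"])
        if action == "pickup" and world_items:
            inventories[a].add(world_items.pop())
        elif action == "drop" and inventories[a]:
            world_items.add(inventories[a].pop())
        elif action == "trade" and inventories[a]:
            b = rng.randrange(n_agents)
            inventories[b].add(inventories[a].pop())
        # Invariant: no item is duplicated or destroyed by any action.
        total = len(world_items) + sum(len(inv) for inv in inventories.values())
        if total != n_items:
            violations.append((action, total))
    return violations
```

With a correct ruleset this returns no violations; introduce a deliberate bug (say, trading without removing the item from the giver) and the harness flags it within a few hundred steps, far cheaper than co-ordinating real players.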
Simulations for All
Right now, this seems irrelevant. But, like many technological developments, it will change the world in unforeseen ways. Chris Anderson’s The Long Tail explored the democratization of the tools of production, the democratization of the tools of distribution, and the connection of supply and demand. The rise of the no-code ecology is currently democratizing the tools of development. What could the democratization and greater adoption of agent-based modelling and other simulation tools catalyse?
Short-term, the answers are interesting: a greater understanding of post-wildfire regrowth, of civil unrest, or of the diffusion parameters needed to push an idea past its critical threshold. Long-term, the answers are unknown: humans-are-gonna-human and act both irrationally and unpredictably. This means that, in lieu of a sufficiently advanced model, we’ll just have to wait and see.
What Exactly Will Transnational Governments Look Like?
Geographically-bounded governments may have outlived their purpose. A new form of governance, one oriented around a global citizenry, will be important moving forward.
Last week, during our weekly Yak online governance chat, we discussed the problem of “common knowledge” – that is, how do you share knowledge among highly distributed social groups? This is especially a problem in groups where membership is highly transient and communication is ephemeral.
The most basic form of shared understanding is group norms, which are harder to build in virtual settings, especially as people come from many distinct cultures with conflicting norms.
For example, Yakshaver Jordan Peacock described living in Kuwait among a large expat community, where greeting norms differed depending on where a person had originally emigrated from. He ended up building a habit of deferring to others on the norm instead of initiating.
These kinds of conflicts will likely increase in scope as the global geopolitical landscape shifts, and nation states are overtaken by virtual communities. This idea was first introduced to me when I visited India in 2012, during a conversation I had with an uncle who had served in Indian government.
He explained that geographically bounded governments have outlived their usefulness and we’re now looking at an emergence of cross-border governments. I left the conversation utterly confused, but he later sent me this thought:
Think about multinational corporations: they are capable of ‘governing’ hundreds of thousands of people across borders through employment, and millions more by shaping their products in specific ways. Additionally, governments already heavily lean on companies as part of diplomatic relationships, e.g. look at how hackers attacking US companies is seen as a threat to US diplomatic relationships, foreign policy, AND national security.
Over the last few years, and especially after Trump’s election (not to mention this tumultuous year, 2020), I’ve been forced to rethink my understanding of governance, community relationships, and, most importantly, how companies must function or evolve to address the need for new types of organizations.
Recently, there’s been a ton of excitement around the idea of completely virtual organizations. An obvious question to ask here is, “Do we have all the necessary pieces to put together a new form of governance?”
A few months after I visited India, Balaji Srinivasan gave a talk called Silicon Valley’s Ultimate Exit at Startup School 2013:
The talk included a table pairing each technology with the government role it touches and the regulation that once constrained it. Essentially, his argument is as follows: the legal and physical protections in that table will be replaced with digital protections like DRM.
One example is 3D printing, which is replacing traditional factories that were once protected by regulations.
Bitcoin is a similar case, where governments might use packet filters to reduce cross-border transactions instead of antiquated capital controls. In a nutshell, the use of force by a national government will eventually have to be replaced by a non-violent digital alternative, as there’s no practical way to apply force against a geographically-distributed virtual community, where membership can quickly move to different mediums.
Balaji suggests that we can escape politics by massaging the laws of states with isolated “economic zones”, but this may be an incomplete solution. In trying to dissolve existing governance structures, communities find themselves MORE involved in political infrastructure development. Silicon Valley suffers from treating democracy as the antithesis of progress while also advocating for more self-rule; this is the tension that always exists in representative politics. Its prescribed ideas tend to fall back on benevolent dictatorships instead of expanding self-rule. I think, though, that moving towards open source software and virtual communities composed of different governance hierarchies will actually push online worlds closer to the American ideal of self-rule, while also solving for the tragedy of the commons.
One wonders how this evolution will occur. It wouldn’t be surprising to see it begin with new flash movements, widespread creation of economic zones, minicountries, and new political parties, which are “all-global” by design. Currently, it’s difficult to tell how the new politics will evolve.
This column will continue weekly, exploring a variety of topics in the “Online Governance” sphere, using the Yak Collective’s weekly Online Governance chat (Fridays at 12pm EDT in the Yak Discord channel) as a jumping-off point.
GPT-3 and “The New Fake Intelligence”
By Alex Wagner
"For me, the big story about #gpt3 is not that it is smart - it is dumb as a pile of rocks - but that piles of rocks can do many things we thought you needed to be smart for. Fake intelligence may be dominant over real intelligence in many domains."
– Anders Sandberg, Senior Research Fellow at Oxford University
OpenAI, the AI research lab that created GPT-3, was founded in 2015 by Elon Musk and others. Musk left OpenAI’s board in February 2018 over a conflict regarding the company’s direction. Last year, Microsoft invested a billion dollars in the company.
OpenAI’s charter commits to the premise that its products and research are used for the benefit of humanity. The company’s board includes Adam D’Angelo and Reid Hoffman. Its CEO is Sam Altman, notable tech-Twitter personality and former president of Y Combinator.
Last month, OpenAI announced GPT-3, trained on a much larger set of data (including an archive of the web called the Common Crawl). This gives it 116x more parameters than its predecessor, GPT-2, which was initially hyped by media outlets as “an AI too dangerous to release” (this was later debunked).
GPT-3 has generated its own unbridled fervor among technologists.
"The GPT-3 hype is way too much. It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out."
– Sam Altman, OpenAI CEO
If you’re online at all, Twitter especially, you may be wondering why people are raving so wildly about GPT-3.
Here’s a brief list of notable projects and experiments with GPT-3:
Nick Cammarata’s gushing tweet threads about GPT-3 as a “better-than-human” therapist.
Kevin Lacker’s blog post “Giving GPT-3 a Turing Test” showcases the author’s findings with regards to how well GPT-3 can fake being a human, and is a short, enjoyable read.
Sharif Shameem created a program with GPT-3 that can auto-generate JSX code for web pages, using plain English to describe the desired layout.
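The pattern behind demos like this is few-shot prompting: show the model a handful of description-to-code pairs, append the new description, and let it continue the text. The example pairs and helper below are hypothetical, invented for illustration; a real tool would send the resulting prompt to OpenAI's API:

```python
# A hypothetical sketch of few-shot prompt construction for
# description -> markup generation. The example pairs are made up;
# nothing here calls the actual OpenAI API.

EXAMPLES = [
    ("a button that says Subscribe", "<button>Subscribe</button>"),
    ("a large heading that says Welcome", "<h1>Welcome</h1>"),
]

def build_prompt(description):
    # Lay out each worked example, then the unanswered description.
    parts = [f"description: {desc}\ncode: {code}" for desc, code in EXAMPLES]
    parts.append(f"description: {description}\ncode:")
    return "\n\n".join(parts)

prompt = build_prompt("an input field with placeholder Email")
# The prompt ends mid-pattern, at "code:", which is exactly the cue the
# model needs to continue with plausible markup.
```

GPT-3 is not retrained for each task; the examples in the prompt are the entire "program", which is why a layout generator, a therapist bot, and a Turing-test subject can all be the same underlying model.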
Gwern’s exhaustive list of “GPT-3 Creative Fiction” experiments ranges from academic and opaque to legitimately weird, and often funny.
OpenAI’s previous projects are worth checking out, as well.
Probably the most down-to-earth take on GPT-3 comes from Julian Togelius, Associate Professor researching AI at NYU:
"GPT-3 often performs like a clever student who hasn't done their reading trying to bullshit their way through an exam. Some well-known facts, some half-truths, and some straight lies, strung together in what first looks like a smooth narrative."
Interestingly, it appears that one major weakness of GPT-3 is that it can’t tell you when it doesn’t know something.
GPT-3 can appear to “think”, in that each response is conditioned on whatever input it is given. It “learns” in an unprecedented way from examples supplied in the prompt itself, and therefore can bullshit, magically creating something out of seemingly nothing (read: 175 billion parameters).
Gwern equates this ability with the creation of a whole new kind of programming, which he calls “prompt programming”.
All hail the New Fake Intelligence.
The quotes were pulled from Lambda Labs’ excellent post, “A Hitchhiker’s Guide to GPT-3”.
For a longer, more complete survey of the most interesting critiques and uses of GPT-3, check out Kaj Sotala’s “thread of threads” on the subject.
Last Sunday was the first IRL Yak Collective get-together. Shout out to Sterling Proffer, Nathan Snyder, and Varun Adibhatla for being the first Yaks to gather in meatspace, taking a Sunday stroll through NYC’s Prospect Park.
Tom Critchlow’s piece on his Indie Consulting Business Model Canvas v0.1. Tom hosted an incredible workshop on this in Discord and Figma. Registrations for his Discord channel are closed for now, but may reopen in the future.
Vaughn Tan’s post, “Agathonicity”, on designing things worth keeping.
Apply to become a Yak here.
Interested in hiring the Yak Collective? Send a message to firstname.lastname@example.org.
The Yak Talk team for this week’s edition: Alex Wagner, Grigori Milov, Shreeda Segan, Praful Mathur, and Matthew Sweet.