Draft AI Policy Points
This is a draft of policy points I put together for the US Transhumanist Party’s nominee for president in 2020.
Problem 1 - mass unemployment from automation & AI
Policy solutions:
- UBI or a negative income tax (NIT) to prevent a poverty trap and allow people to get back on their feet (a minimal sketch of how an NIT avoids the trap follows this list). Fund it through the federal land dividend and by shifting money out of defense.
- Invest in skills retraining programs
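To illustrate how a negative income tax avoids the poverty trap: because the benefit phases out gradually rather than vanishing at a cutoff, every extra dollar earned still raises total income. The floor and phase-out rate below are hypothetical placeholders, not proposed figures.

```python
# Minimal negative income tax (NIT) sketch. FLOOR and PHASE_OUT are
# illustrative assumptions, not policy proposals.
FLOOR = 12_000      # guaranteed income floor, in dollars (assumed)
PHASE_OUT = 0.50    # benefit shrinks 50 cents per dollar earned (assumed)

def nit_benefit(earned: float) -> float:
    """Benefit paid on top of earnings; reaches zero at FLOOR / PHASE_OUT."""
    return max(0.0, FLOOR - PHASE_OUT * earned)

def total_income(earned: float) -> float:
    return earned + nit_benefit(earned)

for earned in (0, 8_000, 16_000, 24_000, 32_000):
    print(f"earned ${earned:>6,} -> total ${total_income(earned):>9,.0f}")
```

With these numbers, total income rises with every dollar earned (an effective marginal tax rate of 50%, never 100%), which is exactly the guarantee a cliff-style benefit cutoff fails to provide.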
Subproblem 1.1 - Increasing rates of depression from loss of purpose and the feeling of being an "underclass" relative to the "tech elite", leading to suicide & more opioid use.
Policy solutions:
- Encourage people to pursue creative work or volunteer.
- Focus more on mental health and wellbeing instead of GDP.
Problem 2 - Lethal autonomous weapons
Subproblem 2.1 - Threat of an expensive & pointless AI arms race
Policy solutions:
- Negotiate international treaties to limit lethal autonomous weapons and head off an arms race
Subproblem 2.2 - Threat of LAWS tech falling into the hands of terrorist organizations and malicious state actors
Policy solutions:
- DoD research on countermeasures for drones and other autonomous attack vectors.
- Ethical use of surveillance technology to monitor threats while ensuring privacy
Problem 3 - The US needs talent to compete on AI for national defense and economic development
Policy solutions:
- Make DoD jobs more appealing to young people. No one should be forced to work on weapons systems; instead, recruits could take roles limited to AI safety and defense-only systems within the DoD.
- Make it easier for highly skilled individuals to immigrate to and/or stay in the US after their PhDs or postdocs.
- Invest in education - coding should be a mandatory subject in all schools.
Problem 4 (long term) - threat of AGI "takeoff" / "singularity" / "paperclip maximizer"
Subproblem 4.1 - Threat of a "unipolar" outcome where one country obtains "AI supremacy" (i.e., China taking over the world with highly advanced AI)
Policy solutions:
- Programs to maintain national competitiveness and stop the ongoing theft of US intellectual property by China.
- Encourage initiatives like SingularityNET and OpenAI to decentralize and distribute AI innovations widely, to prevent global domination by an AI "singleton".
Subproblem 4.2 - Near-term AI safety
There is a growing realization that current AI systems are susceptible to adversarial attacks and can be easily hacked.
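To make this concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest adversarial attacks, written in PyTorch. The model, inputs, and epsilon value are hypothetical placeholders; any differentiable classifier with inputs in [0, 1] would do.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Nudge x in the direction that most increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # A perturbation this small is often imperceptible to humans,
    # yet frequently flips the model's prediction.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage: x_adv = fgsm_attack(classifier, images, labels)
# classifier(x_adv) will often misclassify images it previously got right.
```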
Policy solutions:
- Invest in research on robustness, interpretability, and AI bias.
- Get more smart technical people involved in the regulation of AI systems where human lives are at stake, such as AI diagnostic systems or driverless cars. Currently, most AI experts go to work in the Bay Area (or perhaps New York City), and very few go to regulatory agencies in Washington, DC. One solution is for the FDA and other regulatory agencies to set up offices in Silicon Valley.
Subproblem 4.2.1 - How do we regulate dynamic AIs?
Companies would like their AI systems to "learn on the fly" and improve continually. However, right now regulatory agencies (such as the FDA) will only approve one static version of a system; if it is modified or updated in any way, companies have to go through a long approval process again. More generally, there is a larger technical question about how much we can trust AI systems that are dynamic. This is especially crucial for future systems that don't just retrain the weights in a model but can also rewrite and expand their own code.
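The mismatch is easy to demonstrate. The sketch below is illustrative only (the fingerprinting scheme is an assumption, not an actual FDA process): a regulator certifies one exact parameter set, and a single step of online learning produces a system that no longer matches what was approved.

```python
import hashlib
import numpy as np

def fingerprint(weights: np.ndarray) -> str:
    """Hash of the exact parameters a regulator might certify."""
    return hashlib.sha256(weights.tobytes()).hexdigest()[:16]

weights = np.zeros(1_000, dtype=np.float32)  # stand-in for a trained model
approved = fingerprint(weights)

# One tiny step of "learning on the fly" on new data...
weights += 1e-4 * np.random.randn(1_000).astype(np.float32)

# ...and the deployed system no longer matches the certified artifact.
assert fingerprint(weights) != approved
```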
Subproblem 4.3 - Threat of unsafe / unaligned AGI destroying humanity as a side effect
Policy solutions:
- Invest in AI safety research. Establish long-term funding streams for visionary research on AI safety and on aligning AI with human values. Pursue a "portfolio" of AI safety research, including mathematical / decision-theory research (such as that done at the Machine Intelligence Research Institute), inverse reinforcement learning approaches (such as those pursued at Berkeley's Center for Human-Compatible AI), research on utilitarian ethics (Future of Humanity Institute at Oxford), and the brain-computer interface (BCI) approach advocated by Elon Musk.
Problem 5 - AI not progressing fast enough due to technopessimism
We don't want to be too "doom and gloom" about AI. We need to accelerate AI progress because AI will be essential for figuring out how to stop and reverse aging, target and destroy cancer, and cure all forms of disease. Great economic gains will come from AI, and we don't want "bad PR" to slow progress.