
AI Safety Study Group #93

Open

wdoug opened this issue Aug 6, 2018 · 2 comments
wdoug commented Aug 6, 2018

Why

Machine Learning and Artificial Intelligence are already extremely powerful tools that are having a massive impact on society. These tools are only becoming more powerful and could in the future potentially lead to Artificial General Intelligence (AGI) or superintelligence. While these technologies have amazing potential to positively impact the future, there is also a huge risk that they could have an enormous negative impact (including possible extinction), whether through accidental misuse or malicious intent. Meanwhile, despite these risks, AI Safety is a very neglected area of research, which means that any work done on the topic can have an outsized impact. Taking this into account, 80,000 Hours estimates that working on AI Safety is one of the most effective ways to have a positive impact on the world.

What is the idea

I'd like to start a study group to learn more about the issues and see if we might be able to contribute to any of the directions of research.

Potential Partner(s)?

Additional Background Context

Further reading:

Details

Champion: [To be filled out during exploration stage]
Repo: [To be filled out during exploration stage]
Waffle Board: [To be filled out during exploration stage]

wdoug added the defined label Aug 6, 2018
tylerperkins commented Aug 7, 2018

I just watched an excellent talk about these issues and the state of the art, given by Max Tegmark (MIT) just three weeks ago: Max Tegmark - How Far Will AI Go? Intelligible Intelligence & Beneficial Intelligence. He proposes three goals to "win the wisdom race":

  • Invest in AI safety research
  • Ensure that AI-generated wealth makes everyone better off
  • Ban lethal autonomous weapons

He also mentions an excellent resource, Victoria Krakovna's page, Deep Safety.

wdoug self-assigned this Dec 14, 2018
@provmusic

I would be very interested in joining a conversation centered around this. I've spent quite a bit of time pondering this space. My current peak area of interest is the effects AI can have, and is currently having, on culture.
