Why

Machine Learning and Artificial Intelligence are already extremely powerful tools that are having a massive impact on society. These tools are only becoming more powerful, and in the future they could potentially lead to Artificial General Intelligence (AGI) or superintelligence. While these technologies have amazing potential to positively shape the future, there is also a huge risk that they could have an enormous negative impact (up to and including human extinction), whether through accidental misuse or malicious intent. Despite these risks, AI Safety remains a very neglected area of research, which means that any work done on the topic can have an outsized impact. Taking this into account, 80,000 Hours estimates that working on AI Safety is one of the most effective ways to have a positive impact on the world.
What is the idea?
I'd like to start a study group to learn more about the issues and see if we might be able to contribute to any of the directions of research.
Potential Partner(s)?
For local contacts, we might want to reach out to:
Additional Background Context

Further reading:

Details

Champion: [To be filled out during exploration stage]
Repo: [To be filled out during exploration stage]
Waffle Board: [To be filled out during exploration stage]
I would be very interested in joining a conversation centered around this. I've spent quite a bit of time pondering this space. My current peak area of interest is the effect AI can have, and is currently having, on culture.