Commit

Update
samholt committed Apr 22, 2024
1 parent 264afbf commit 59a0615
Showing 5 changed files with 50 additions and 278 deletions.
49 changes: 0 additions & 49 deletions docs/api-examples.md

This file was deleted.

96 changes: 21 additions & 75 deletions docs/faq.md
@@ -3,91 +3,37 @@ Our vision empowers users to achieve 10x more with AI. The LLM Automatic Compute
### Convenient Link for Sharing this Document:

```
- FAQ https://github.com/geekan/MetaGPT/blob/main/docs/FAQ-EN.md
- MetaGPT-Index/FAQ-CN https://deepwisdom.feishu.cn/wiki/MsGnwQBjiif9c3koSJNcYaoSnu4
- FAQ https://github.com/samholt/L2MAC/blob/master/docs/faq.md
```

### Link

1. Code: https://github.com/geekan/MetaGPT
2. Roadmap: https://github.com/geekan/MetaGPT/blob/main/docs/ROADMAP.md
3. EN
    1. Demo Video: [MetaGPT: Multi-Agent AI Programming Framework](https://www.youtube.com/watch?v=8RNzxZBTW8M)
    2. Tutorial: [MetaGPT: Deploy POWERFUL Autonomous Ai Agents BETTER Than SUPERAGI!](https://www.youtube.com/watch?v=q16Gi9pTG_M&t=659s)
    3. Author's thoughts video (EN): [MetaGPT Matthew Berman](https://youtu.be/uT75J_KG_aY?si=EgbfQNAwD8F5Y1Ak)
4. CN
    1. Demo Video: [MetaGPT: Build your own virtual company with one line of code (bilibili)](https://www.bilibili.com/video/BV1NP411C7GW/?spm_id_from=333.999.0.0&vd_source=735773c218b47da1b4bd1b98a33c5c77)
    2. Tutorial: [Write the game Flappy Bird from a single prompt: MetaGPT, 10x stronger than AutoGPT and the AI project closest to AGI](https://youtu.be/Bp95b8yIH5c)
    3. Author's thoughts video (CN): [In-depth analysis by the MetaGPT author, livestream replay (bilibili)](https://www.bilibili.com/video/BV1Ru411V7XL/?spm_id_from=333.337.search-card.all.click)
1. Code: https://github.com/samholt/l2mac/
2. Roadmap: https://github.com/samholt/L2MAC/blob/master/docs/roadmap.md

### How to become a contributor?
### How do I become a contributor?

1. Choose a task from the Roadmap (or you can propose one). By submitting a PR, you can become a contributor and join the dev team.
2. Current contributors come from backgrounds including ByteDance AI Lab/DingDong/Didi/Xiaohongshu, Tencent/Baidu/MSRA/TikTok/BloomGPT Infra/Bilibili/CUHK/HKUST/CMU/UCB
2. Current contributors come from backgrounds including the Universities of Oxford and Cambridge, as well as companies.

### Chief Evangelist (Monthly Rotation)
### Become the Chief Evangelist for the L2MAC Community

MetaGPT Community - The position of Chief Evangelist rotates on a monthly basis. The primary responsibilities include:
Join us as the Chief Evangelist, a dynamic role that changes hands every month, fueling continuous innovation and fresh ideas within our community. Here's what you'll do:

- **Community Leadership and Support:** Take charge of maintaining essential community resources such as FAQ documents, announcements, and GitHub READMEs. Ensure that every community member has the information they need to thrive.
- **Rapid Response:** Act as the first point of contact for community inquiries. Your goal will be to respond to questions on platforms like GitHub Issues and Discord within 30 minutes, ensuring our community remains informed and engaged.
- **Foster Positive Engagement:** Cultivate an environment that is not only enthusiastic and genuine but also welcoming. We aim to make every member feel valued and supported.
- **Encourage Active Participation:** Inspire community members to contribute to projects that push the boundaries toward tools that make people 10x more productive. Your encouragement will help harness the collective expertise and passion of our community.
- **Event Coordination (Optional):** Have a flair for event planning? You can choose to organize small-scale events, such as hackathons, which are crucial for sparking innovation and collaboration within the community.

**Why Join Us?**

This role offers the unique opportunity to be at the forefront of the AI revolution, engage with like-minded individuals, and play a pivotal part in steering our community towards significant contributions in the field of AGI. If you are passionate about AI, eager to help others, and ready to lead, the Chief Evangelist position is your platform to shine and make an impact. Interested applicants should email `sih31 (at) cam.ac.uk`.

1. Maintaining community FAQ documents, announcements, and Github resources/READMEs.
2. Responding to, answering, and distributing community questions within an average of 30 minutes, including on platforms like Github Issues, Discord and WeChat.
3. Upholding a community atmosphere that is enthusiastic, genuine, and friendly.
4. Encouraging everyone to become contributors and participate in projects that are closely related to achieving AGI (Artificial General Intelligence).
5. (Optional) Organizing small-scale events, such as hackathons.

### FAQ

1. Code truncation / parsing failure:
    1. Check if it's due to exceeding the length limit. Consider using gpt-4-turbo-preview or another long-context version.
2. Success rate:
    1. There hasn't been a quantitative analysis yet, but the success rate of code generated by gpt-4-turbo-preview is significantly higher than that of gpt-3.5-turbo.
3. Support for incremental, differential updates (if you wish to continue a half-done task):
    1. There is now an experimental version. Specify `--inc --project-path "<path>"` or `--inc --project-name "<name>"` on the command line and enter the corresponding requirements to try it.
4. Can existing code be loaded?
    1. We are working on this, but it is very difficult; especially when the project is large, it is hard to achieve a high success rate.
5. Support for multiple programming languages and natural languages?
    1. This is now supported, but it is still experimental.
6. Want to join the contributor team? How to proceed?
    1. Merging a PR will get you into the contributor team. The main ongoing tasks are all listed on the ROADMAP.
7. PRD stuck / unable to access / connection interrupted
    1. The official OpenAI base_url address is `https://api.openai.com/v1`.
    2. If the official OpenAI base_url is inaccessible in your environment (this can be verified with curl), it's recommended to set base_url to a "reverse-proxy" provider such as openai-forward, for instance `base_url: "https://api.openai-forward.com/v1"`.
    3. If the official OpenAI base_url is inaccessible in your environment (again, verifiable via curl), another option is to configure llm.proxy in `config2.yaml`. This way, you can access the official base_url via a local proxy. If you don't need to access via a proxy, please do not enable this configuration; if accessing through a proxy is required, set it to the correct proxy address. A sketch of the relevant configuration appears just after this list.
    4. Note: OpenAI's default API design ends with a `v1`. An example of the correct configuration is: `base_url: "https://api.openai.com/v1"`.
8. Getting the reply "Absolutely! How can I assist you today?"
    1. Did you use Chi or a similar service? These services are prone to errors, and the error rate seems higher when consuming 3.5k-4k tokens with GPT-4.
9. What does max token mean?
    1. It's a configuration for OpenAI's maximum response length. If the response exceeds max token, it will be truncated.
10. How to change the investment amount?
    1. You can view all commands by typing `metagpt --help`.
11. Which version of Python is more stable?
    1. python3.9 / python3.10
12. Can't use GPT-4, getting the error "The model gpt-4 does not exist."
    1. OpenAI's official requirement: you can use GPT-4 only after spending $1 on OpenAI.
    2. Tip: run some data with gpt-3.5-turbo (consuming the free quota and $1), and then you should be able to use gpt-4.
13. Can games whose code has never been seen before be written?
    1. Refer to the README. The recommendation system of Toutiao is one of the most complex systems in the world currently. Although it's not on GitHub, many discussions about it exist online. If it can visualize these, it suggests it can also summarize these discussions and convert them into code. The prompt would be something like "write a recommendation system similar to Toutiao". Note: this was approached in earlier versions of the software. The SOP of those versions was different; the current one adopts Elon Musk's five-step work method, emphasizing trimming down requirements as much as possible.
14. Under what circumstances would there typically be errors?
    1. More than 500 lines of code: some function implementations may be left blank.
    2. When using a database, it often gets the implementation wrong, since the SQL database initialization process is usually not in the code.
    3. With more lines of code, there's a higher chance of false impressions, leading to calls to non-existent APIs.
15. An error occurred during installation: "Another program is using this file...egg".
    1. Delete the file and try again.
    2. Or manually execute `pip install -r requirements.txt`.
16. The origin of the name MetaGPT?
    1. The name was derived after iterating with GPT-4 over a dozen rounds. GPT-4 scored and suggested it.
17. openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details
    1. If you haven't exhausted your free quota, set RPM to 3 or lower in the settings.
    2. If your free quota is used up, consider adding funds to your account.
18. What does "borg" mean in n_borg?
    1. [Wikipedia borg meaning](https://en.wikipedia.org/wiki/Borg)
    2. The Borg civilization operates based on a hive or collective mentality, known as "the Collective." Every Borg individual is connected to the collective via a sophisticated subspace network, ensuring continuous oversight and guidance for every member. This collective consciousness allows them to not only "share the same thoughts" but also to adapt swiftly to new strategies. While individual members of the collective rarely communicate, the collective "voice" sometimes transmits aboard ships.
19. How to use the Claude API?
    1. The full implementation of the Claude API is not provided in the current code.
    2. You can use the Claude API through third-party API conversion projects such as: https://github.com/jtsang4/claude-to-chatgpt
20. Is Llama2 supported?
    1. On the day Llama2 was released, some community members began experiments and found that output can be generated based on MetaGPT's structure. However, Llama2's context is too short to generate a complete project. Before regularly using Llama2, it's necessary to expand the context window to at least 8k. If anyone has good recommendations for expansion models or methods, please leave a comment.
21. `mermaid-cli getElementsByTagName SyntaxError: Unexpected token '.'`
    1. Upgrade node to version 14.x or above:
        1. `npm install -g n`
        2. `n stable` to install the stable version of node (v18.x)
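
Following up on item 7 above (base_url and proxy configuration), here is a minimal sketch of the relevant `config2.yaml` fields. The key names and placeholder values shown are illustrative assumptions; check the example config shipped with your version for the authoritative layout.

```yaml
llm:
  api_type: "openai"
  model: "gpt-4-turbo-preview"
  base_url: "https://api.openai.com/v1"   # or a reverse-proxy, e.g. https://api.openai-forward.com/v1
  api_key: "YOUR_API_KEY"                 # placeholder
  proxy: "http://127.0.0.1:7890"          # only needed when the official base_url must be reached via a local proxy
```
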
1. Code tests are failing due to an import error:
    1. At present, any package the LLM agent tries to use must already be installed in the virtualenv that you are running L2MAC from. It is therefore best to find out which packages L2MAC is trying to use and install them, or the specific package versions of them (see the sketch after this list). We plan to fix this in the future with self-created virtualenvs per codebase generation; see the [roadmap](roadmap), and we welcome contributions on this.
2. Want to join the contributor team? How to proceed?
    1. Merging a PR will get you into the contributor team. The main ongoing tasks are all listed on the [roadmap](roadmap).
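
For the import-error item above, a minimal illustration of the workaround is shown below; the codebase path is a placeholder for wherever your generated code was written.

```bash
# Inspect which packages the generated code expects
cat path/to/generated_codebase/requirements.txt

# Install them (and any pinned versions) into the virtualenv you run L2MAC from
pip install -r path/to/generated_codebase/requirements.txt
```
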
4 changes: 3 additions & 1 deletion docs/guide/tutorials/l2mac_101.md
@@ -39,7 +39,9 @@ Increasing the number of steps within the prompt program plan that the LLM agent

When generating code for a codebase, a user can optionally specify to run self-generated unit tests that are created automatically alongside the codebase. This also runs a static code analyzer to check for invalid code, and any errors from the unit tests or invalid code are iterated on with the LLM agent to fix before continuing to the next prompt-program instruction step. This can significantly improve the quality of the output, especially for coding tasks that involve complex integrations or tricky logic. For example, you can set `run_tests=True`.

::: tip
Please note that when running code tests, the packages that the LLM tries to use within the code it runs must already be installed; errors will be thrown if packages are missing. You can iterate with the outputs: check the generated `requirements.txt` file and install any missing packages or package versions.
:::
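
As a concrete illustration of the option above, the sketch below assumes the top-level `generate_codebase` helper from the `l2mac` package; the prompt text and the handling of the return value are illustrative only.

```python
from l2mac import generate_codebase

# Generate a small codebase; with run_tests=True, the self-generated unit tests
# and the static code analyzer are run after each instruction step, and any
# errors are iterated on with the LLM agent before it moves to the next step.
codebase = generate_codebase(
    "Create a simple command-line to-do list application in Python.",
    run_tests=True,
)

# The return value is expected to map generated file names to their contents.
for file_name in codebase:
    print(file_name)
```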

### Advanced custom parameters

85 changes: 0 additions & 85 deletions docs/markdown-examples.md

This file was deleted.

