Releases: microsoft/FLAML
v2.0.1
This release contains prompt improvements and bug fixes. In the next version, we will rename `ResponsiveAgent` to `ConversableAgent`.
Thanks @kevin666aa for the contribution, and @skzhang1 @LittleLittleCloud @JieyuZ2 @gagb for reviewing.
What's Changed
- Cover function calls with no arguments by @kevin666aa in #1185
- fix generate_reply when sender is None. by @kevin666aa in #1186
- prompt improvement by @sonichi in #1188
- document response fields by @sonichi in #1199
Full Changelog: v2.0.0...v2.0.1
v2.0.0
Prepare for a roller coaster ride of innovation with the launch of FLAML v2.0.0! This is not just another update but a culmination of numerous enhancements, novel features, and exciting improvements we've made from v2.0.0rc1 to v2.0.0rc5, leading to the grand v2.0.0 release.
- With v2.0.0rc1, we embarked on a major refactor with the creation of an `[automl]` option to declutter dependencies for `autogen` and `tune`.
- In v2.0.0rc2, we supercharged FLAML with support for the new OpenAI gpt-3.5-turbo and gpt-4 models in `autogen` and rolled out the extensibility of autogen agents.
- With v2.0.0rc3, we upped the ante by adding support for the new OpenAI models' function calling in agents and provided a handy code example in a dedicated notebook.
- v2.0.0rc4 brought a host of improvements to the `agentchat` framework, enabling many new applications.
- v2.0.0rc5 pushed the boundaries further by making auto-reply methods pluggable and supporting an asynchronous mode in agents.
Finally, we arrive at the grand v2.0.0 release! This version boasts numerous feature enhancements in `autogen`, such as a multi-agent chat framework (in preview), expanded OpenAI model support, enhanced integration with Spark, and much more. A minimal usage sketch of the new agentchat framework follows.
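For orientation, here is a minimal two-agent sketch of the agentchat framework. It is a hedged sketch assuming the `flaml.autogen` API as released in v2.0.0; the config file name `OAI_CONFIG_LIST`, the working directory, and the task message are placeholders, and the linked notebooks below remain the authoritative examples.

```python
# Minimal agentchat sketch (assumes the flaml.autogen API as of v2.0.0;
# OAI_CONFIG_LIST, work_dir, and the task message are placeholders).
from flaml import autogen

# Load a list of OpenAI/Azure OpenAI configurations from a JSON file or env variable.
config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",                      # fully automated auto-reply loop
    code_execution_config={"work_dir": "coding"},  # execute suggested code locally
)

# The user proxy sends the task and keeps auto-replying (e.g., by executing code)
# until a termination condition is met.
user_proxy.initiate_chat(assistant, message="Plot a chart of META and TESLA stock price change YTD.")
```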
Documentation for AutoGen: https://microsoft.github.io/FLAML/docs/Use-Cases/Autogen
Examples: https://microsoft.github.io/FLAML/docs/Examples/AutoGen-AgentChat
Blogposts: https://microsoft.github.io/FLAML/blog
A huge shoutout to @qingyun-wu @kevin666aa @skzhang1 @ekzhu @BeibinLi @thinkall @LittleLittleCloud @JieyuZ2 @gagb @EgorKraevTransferwise @markharley @int-chaos @levscaut @feiran-jia @liususan091219 @royninja @pcdeadeasy as well as our new contributors @badjouras, @LeoLjl, @xiaoboxia, and @minghao51 who joined us during this journey. Your contributions have played a pivotal role in shaping this release.
What's Changed
- Blogpost for adaptation in HumanEval by @sonichi in #1048
- Improve messaging in documentation by @sonichi in #1050
- create an automl option to remove unnecessary dependency for autogen and tune by @sonichi in #1007
- docs: 📝 Fix link to installation section in Task-Oriented-AutoML.md by @badjouras in #1051
- doc and test update by @sonichi in #1053
- remove redundant doc and add tutorial by @qingyun-wu in #1004
- add agent notebook and documentation by @qingyun-wu in #1052
- Support more azure openai api_type by @thinkall in #1059
- suppress warning message of pandas_on_spark to_spark by @thinkall in #1058
- Agent notebook example with human feedback; Support shell command and multiple code blocks; Improve the system message for assistant agent; Improve utility functions for config lists; reuse docker image by @sonichi in #1056
- Fix documentation by @sonichi in #1075
- encode timeout msg in bytes by @sonichi in #1078
- Add pandas requirement in benchmark option by @qingyun-wu in #1070
- Fix pyspark tests in workflow by @thinkall in #1071
- Documentation for agents by @qingyun-wu in #1057
- Links to papers by @sonichi in #1084
- update openai model support by @sonichi in #1082
- string to array by @sonichi in #1086
- Factor out time series-related functionality into a time series Task object by @EgorKraevTransferwise in #989
- An agent implementation of MathChat by @kevin666aa in #1090
- temp solution for joblib 1.3.0 issue by @thinkall in #1100
- support string alg in tune by @skzhang1 in #1093
- update flaml version in MathChat notebook by @kevin666aa in #1095
- doc update by @sonichi in #1089
- Update OptunaSearch by @skzhang1 in #1106
- Support function_call in `autogen/agent` by @kevin666aa in #1091
- update notebook with new models by @sonichi in #1112
- Enhance Integration with Spark by @levscaut in #1097
- Add Funccall notebook and document by @kevin666aa in #1110
- Update docstring for oai.completion. by @LeoLjl in #1113
- Try to prevent the default AssistantAgent from asking users to modify the code by @sonichi in #1114
- update colab link by @sonichi in #1118
- fix bug in math_user_proxy_agent by @kevin666aa in #1124
- Add log metric by @thinkall in #1125
- Update assistant agent by @sonichi in #1121
- suppress printing data split type by @xiaoboxia in #1126
- change price ratio by @sonichi in #1130
- simplify the initiation of chat by @sonichi in #1131
- Update docs on how to interact with local LLM by @LeoLjl in #1128
- Json config list, agent refactoring and new notebooks by @sonichi in #1133
- unify auto_reply; bug fix in UserProxyAgent; reorg agent hierarchy by @sonichi in #1142
- rename GenericAgent -> ResponsiveAgent by @sonichi in #1146
- Bump semver from 5.7.1 to 5.7.2 in /website by @dependabot in #1119
- autogen.agent -> autogen.agentchat by @sonichi in #1148
- MathChat blog post by @kevin666aa in #1096
- Commenting use_label_encoder - xgboost by @minghao51 in #1122
- raise error when msg is invalid; fix docstr; improve ResponsiveAgent; update doc and packaging; capture ipython output; configurable default reply by @sonichi in #1154
- consecutive auto reply, history, template, group chat, class-specific reply by @sonichi in #1165
- Improve auto reply registration by @sonichi in #1170
- Make auto reply method pluggable by @sonichi in #1177
- support async in agents by @sonichi in #1178
- Updated README.md with installation Link by @royninja in #1180
- Add RetrieveChat by @thinkall in #1158
- silent; code_execution_config; exit; version by @sonichi in #1179
New Contributors
- @badjouras made their first contribution in #1051
- @kevin666aa made their first contribution in #1090
- @LeoLjl made their first contribution in #1113
- @xiaoboxia made their first contribution in #1126
- @minghao51 made their first contribution in #1122
Full Changelog: v1.2.4...v2.0.0
v2.0.0rc5
This version makes auto-reply methods pluggable and supports an asynchronous mode in agents. An example of handling data streams is added.
Thanks to @qingyun-wu @ekzhu for laying the foundation and reviewing!
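As a rough illustration only, the sketch below shows how a custom auto-reply method could be plugged into an agent. It assumes a registration hook named `register_reply` and a reply function returning a `(final, reply)` pair; the exact method name and signature in this release candidate may differ, so treat the repository notebooks as authoritative.

```python
# Hedged sketch of a pluggable auto-reply method (the hook name `register_reply`
# and the (final, reply) return convention are assumptions for this rc).
from flaml.autogen import AssistantAgent, UserProxyAgent

def log_and_pass(recipient, messages=None, sender=None, config=None):
    # Custom reply hook: log the latest message, then let later reply methods run.
    if messages:
        print(f"{sender.name} -> {recipient.name}: {messages[-1].get('content', '')[:80]}")
    return False, None  # False = not final; fall through to the default replies

assistant = AssistantAgent("assistant")
user_proxy = UserProxyAgent("user_proxy", human_input_mode="NEVER")
# Trigger the hook whenever a UserProxyAgent is the sender.
assistant.register_reply(UserProxyAgent, log_and_pass)
```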
What's Changed
Full Changelog: v2.0.0rc4...v2.0.0rc5
v2.0.0rc4
This pre-release brings many improvements to the `agentchat` framework, enabling many new applications.
Thanks @JieyuZ2 @gagb @thinkall @BeibinLi @ekzhu @LittleLittleCloud @kevin666aa @qingyun-wu @LeoLjl and others for your contributions!
What's Changed
- update colab link by @sonichi in #1118
- fix bug in math_user_proxy_agent by @kevin666aa in #1124
- Add log metric by @thinkall in #1125
- Update assistant agent by @sonichi in #1121
- suppress printing data split type by @xiaoboxia in #1126
- change price ratio by @sonichi in #1130
- simplify the initiation of chat by @sonichi in #1131
- Update docs on how to interact with local LLM by @LeoLjl in #1128
- Json config list, agent refactoring and new notebooks by @sonichi in #1133
- unify auto_reply; bug fix in UserProxyAgent; reorg agent hierarchy by @sonichi in #1142
- rename GenericAgent -> ResponsiveAgent by @sonichi in #1146
- Bump semver from 5.7.1 to 5.7.2 in /website by @dependabot in #1119
- autogen.agent -> autogen.agentchat by @sonichi in #1148
- MathChat blog post by @kevin666aa in #1096
- Commenting use_label_encoder - xgboost by @minghao51 in #1122
- raise error when msg is invalid; fix docstr; improve ResponsiveAgent; update doc and packaging; capture ipython output; configurable default reply by @sonichi in #1154
- consecutive auto reply, history, template, group chat, class-specific reply by @sonichi in #1165
- Improve auto reply registration by @sonichi in #1170
New Contributors
- @xiaoboxia made their first contribution in #1126
- @minghao51 made their first contribution in #1122
Full Changelog: v2.0.0rc3...v2.0.0rc4
v2.0.0rc3
Highlights
Added support for the new OpenAI models' function calling in agents. Thanks to @kevin666aa, @sonichi, and @qingyun-wu.
Please find a code example in this notebook: https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agent_function_call.ipynb
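The notebook above is the authoritative example; what follows is only a hedged sketch of the idea, written against the final v2.0.0 import layout (the rc3 layout still used `autogen.agent`) and assuming `llm_config` accepts an OpenAI `functions` schema and `UserProxyAgent` accepts a `function_map`. The model name, API key, and `get_weather` helper are placeholders.

```python
# Hedged sketch of agent function calling (import paths follow the final v2.0.0
# layout; the config entries and the get_weather helper are placeholders).
from flaml import autogen

def get_weather(city: str) -> str:
    # Toy callable the model can invoke; stands in for a real implementation.
    return f"It is sunny in {city}."

config_list = [{"model": "gpt-4-0613", "api_key": "<your key>"}]  # placeholder config
llm_config = {
    "config_list": config_list,
    "functions": [{
        "name": "get_weather",
        "description": "Get the weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
}

assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    function_map={"get_weather": get_weather},  # executed when the model requests the call
)
user_proxy.initiate_chat(assistant, message="What's the weather in Seattle?")
```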
What's Changed
- temp solution for joblib 1.3.0 issue by @thinkall in #1100
- support string alg in tune by @skzhang1 in #1093
- update flaml version in MathChat notebook by @kevin666aa in #1095
- doc update by @sonichi in #1089
- Update OptunaSearch by @skzhang1 in #1106
- Support function_call in `autogen/agent` by @kevin666aa in #1091
- update notebook with new models by @sonichi in #1112
- Enhance Integration with Spark by @levscaut in #1097
- Add Funccall notebook and document by @kevin666aa in #1110
- Update docstring for oai.completion. by @LeoLjl in #1113
- Try to prevent the default AssistantAgent from asking users to modify the code by @sonichi in #1114
New Contributors
Full Changelog: v2.0.0rc2...v2.0.0rc3
v2.0.0rc2
Highlights
- Support for the new OpenAI gpt-3.5-turbo and gpt-4 models in `autogen` (a minimal call sketch follows this list). Thanks to @gagb @kevin666aa @qingyun-wu @ekzhu @BeibinLi.
- MathChat implemented with `autogen.agents`. Thanks to @kevin666aa @qingyun-wu.
- Time-series-related functionality in `automl` is factored out. Thanks to @EgorKraevTransferwise.
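The call sketch referenced above: a hedged example assuming `flaml.oai.ChatCompletion.create` mirrors the OpenAI ChatCompletion interface and that an `extract_text` helper is available; the prompt is a placeholder.

```python
# Hedged sketch of calling the newly supported chat models through flaml.oai
# (assumes ChatCompletion.create mirrors the OpenAI interface; the prompt is a placeholder).
from flaml import oai

response = oai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # "gpt-4" is supported as well
    messages=[{"role": "user", "content": "Summarize FLAML in one sentence."}],
)
print(oai.ChatCompletion.extract_text(response))
```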
Thanks to all the contributors and reviewers @thinkall @qingyun-wu @EgorKraevTransferwise @kevin666aa @liususan091219 @skzhang1 @jtongxin @pcdeadeasy @markharley @int-chaos !
What's Changed
- Fix documentation by @sonichi in #1075
- encode timeout msg in bytes by @sonichi in #1078
- Add pandas requirement in benchmark option by @qingyun-wu in #1070
- Fix pyspark tests in workflow by @thinkall in #1071
- Documentation for agents by @qingyun-wu in #1057
- Links to papers by @sonichi in #1084
- update openai model support by @sonichi in #1082
- string to array by @sonichi in #1086
- Factor out time series-related functionality into a time series Task object by @EgorKraevTransferwise in #989
- An agent implementation of MathChat by @kevin666aa in #1090
New Contributors
- @kevin666aa made their first contribution in #1090
Full Changelog: 2.0.0rc1...v2.0.0rc2
2.0.0rc1
This release includes:
- A major refactor: the creation of an `automl` option to remove unnecessary dependencies for `autogen` and `tune` (thanks to @sonichi).
- A newly added blog post addressing adaptation in HumanEval (thanks to @sonichi).
- A newly added `tutorials` folder containing all the tutorials on FLAML (thanks to @qingyun-wu, @sonichi, and @thinkall).
- Documentation improvements and link corrections.
- The addition of documentation and a notebook example on interactive LLM agents in FLAML (thanks to @qingyun-wu, @sonichi, @thinkall, and @pcdeadeasy.)
- Support more azure openai api_type (thanks to @thinkall )
- Suppress warning message of pandas_on_spark to_spark (thanks to @thinkall )
- Support shell command and multiple code blocks (thanks to @sonichi )
- Improve the system message for assistant agent (thanks to @sonichi and @gagb )
- Improve utility functions for config lists (thanks to @sonichi )
- Reuse docker image in a session (thanks to @sonichi and @gagb )
A hearty welcome to our new contributor, @badjouras, who made their first contribution. Thanks to code reviewers @gagb @pcdeadeasy @liususan091219 @thinkall @levscaut @sonichi @qingyun-wu.
What's Changed
- Blogpost for adaptation in HumanEval by @sonichi in #1048
- Improve messaging in documentation by @sonichi in #1050
- create an automl option to remove unnecessary dependency for autogen and tune by @sonichi in #1007
- docs: 📝 Fix link to installation section in Task-Oriented-AutoML.md by @badjouras in #1051
- doc and test update by @sonichi in #1053
- remove redundant doc and add tutorial by @qingyun-wu in #1004
- add agent notebook and documentation by @qingyun-wu in #1052
- Support more azure openai api_type by @thinkall in #1059
- suppress warning message of pandas_on_spark to_spark by @thinkall in #1058
- Agent notebook example with human feedback; Support shell command and multiple code blocks; Improve the system message for assistant agent; Improve utility functions for config lists; reuse docker image by @sonichi in #1056
New Contributors
- @badjouras made their first contribution in #1051
Full Changelog: v1.2.4...2.0.0rc1
v1.2.4
This release contains:
- improved support for using a list of configurations (thanks to @BeibinLi ),
- using a filter to select a response from those generated by a sequence of configurations (doc).
- a new experimental human-proxy agent (thanks to @qingyun-wu and @gagb).
- utility function to create config lists.
- method `clear_cache` added in `oai.Completion`.
- update of the default search space (thanks to @Kyoshiin and @LittleLittleCloud).
- prepare for flaml v2 (thanks to @qingyun-wu for writing the blogpost).
Breaking change: `cache_path` is renamed to `cache_path_root` in `set_cache` (a hedged sketch follows below).
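A hedged sketch of the cache and response-filter features, assuming the `flaml.oai.Completion` interface (`set_cache`, `clear_cache`, and a `filter_func(context, config, response)` callback); the model names and prompt are placeholders.

```python
# Hedged sketch of the v1.2.4 cache and response-filter features
# (model names and prompt are placeholders).
from flaml import oai

# Breaking change: the keyword is now `cache_path_root` (formerly `cache_path`).
oai.Completion.set_cache(seed=41, cache_path_root=".cache")
oai.Completion.clear_cache(seed=41)  # newly added: wipe cached responses for this seed

def non_empty_filter(context, config, response):
    # Move on to the next config in `config_list` until a non-empty reply comes back.
    return any(text.strip() for text in oai.Completion.extract_text(response))

response = oai.Completion.create(
    config_list=[{"model": "gpt-3.5-turbo"}, {"model": "text-davinci-003"}],
    prompt="Write a haiku about AutoML.",
    filter_func=non_empty_filter,
)
```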
Thanks to code reviewers @skzhang1 @jtongxin @pcdeadeasy @ZviBaratz @LittleLittleCloud @Borda, and to @liususan091219 @thinkall for fixing test errors.
What's Changed
- Catch AuthenticationError trying different configs by @BeibinLi in #1023
- chat completion check by @sonichi in #1024
- update model of text summarization in test by @liususan091219 in #1030
- Human agent by @qingyun-wu in #1025
- fix of website link by @sonichi in #1042
- Blogpost by @qingyun-wu in #1026
- Update default search space by @Kyoshiin in #1044
- Fix PULL_REQUEST_TEMPLATE and improve test by removing unnecessary environment variable by @thinkall in #1043
- response filter by @sonichi in #1039
New Contributors
Full Changelog: v1.2.3...v1.2.4
v1.2.3
This release contains a number of updates in autogen and automl.
- We added more utilities and documentation improvements to `flaml.oai`, such as logging, templating, and using multiple configs, to make developing and experimenting with OpenAI models more convenient. (Thanks to @afourney @victordibia @torronen @ekzhu.)
- We added an experimental coding agent based on GPT-4. (Thanks to @BeibinLi @qingyun-wu @skzhang1.)
- We added an option (`mlflow_logging`) to disable the default mlflow logging in automl (thanks to @garar).
- We made better use of parallelism on auto-scaling Spark clusters for automl and tune (thanks to @thinkall); a combined sketch of these options follows this list.
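The combined sketch referenced above, assuming `mlflow_logging` (the parameter added in #1015), `use_spark`, and `n_concurrent_trials` are accepted as `fit` keywords; the dataset and time budget are placeholders, and Spark mode requires `pyspark` and the spark extra to be installed.

```python
# Hedged sketch of the v1.2.3 automl options (dataset and budget are placeholders;
# use_spark requires pyspark and the flaml[spark] extra).
from sklearn.datasets import load_diabetes
from flaml import AutoML

X, y = load_diabetes(return_X_y=True)
automl = AutoML()
automl.fit(
    X_train=X,
    y_train=y,
    task="regression",
    time_budget=60,
    mlflow_logging=False,   # newly added switch to turn off the default mlflow logging
    use_spark=True,         # run trials as Spark jobs on an auto-scaling cluster
    n_concurrent_trials=4,  # number of trials evaluated in parallel
)
```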
Thanks to @royninja @luckyklyist @Borda @qingyun-wu @thinkall @liususan091219 for other bug fixes, documentation improvements, and engineering improvements. Thanks to @victordibia @skzhang1 @kevin666aa @jtongxin @levscaut for code reviews.
What's Changed
- version update post release v1.2.2 by @sonichi in #1005
- fixing the typo #990 by @royninja in #994
- fixed sentence misplace #998 by @luckyklyist in #1010
- pyproject.toml & switch to Ruff by @Borda in #976
- update readme by @qingyun-wu in #1014
- raise content_filter error by @sonichi in #1018
- Fix catboost failure in mac-os python<3.9 by @thinkall in #1020
- coding agent; logging by @sonichi in #1011
- Add mlflow_logging param by @garar in #1015
- fix NLP zero division error by @liususan091219 in #1009
- update max_spark_parallelism to fit in auto-scale spark cluster by @thinkall in #1008
- update spark session in spark tests by @thinkall in #1006
- Mark experimental classes; doc; multi-config trial by @sonichi in #1021
New Contributors
- @luckyklyist made their first contribution in #1010
- @garar made their first contribution in #1015
Full Changelog: v1.2.2...v1.2.3
v1.2.2
What's Changed
- update nlp notebook by @liususan091219 in #940
- Blog post for LLM tuning by @sonichi in #986
- fix zerodivision by @liususan091219 in #1000
- extract code from text; solve_problem; request_timeout in config; improve code by @sonichi in #999
Full Changelog: v1.2.1...v1.2.2