Releases: milvus-io/milvus
milvus-2.2.10
2.2.10
Release date: 14 June, 2023
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.2.10 | 2.2.12 | 2.2.6 | 2.2.4 | 2.2.17 |
We are excited to announce the release of Milvus 2.2.10! This update includes important bug fixes, specifically addressing occasional system crashes, ensuring a more stable experience. We have also made significant improvements to loading and indexing speeds, resulting in smoother operations. A significant optimization in this release is the reduction of memory usage in data nodes, made possible through the integration of the Go payload writer instead of the old CGO implementation. Furthermore, we have expanded our Role-Based Access Control (RBAC) capabilities, extending these protections to the database and 'Flush All' API. Enjoy the enhanced security and performance of Milvus 2.2.10!
New Features
- Added role-based access control (RBAC) for the new interfaces, including the database and "Flush All" APIs
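As a minimal sketch of what the expanded RBAC coverage looks like from the client side: the role name below is illustrative, and the exact privilege/object strings ("FlushAll" on the "Global" object) are assumptions based on this release's notes, not a verified API listing.

```python
# Hypothetical grant covering the newly RBAC-protected "Flush All" API.
# Role name and exact privilege/object strings are assumptions.
grant = {
    "role": "backup_operator",
    "object": "Global",       # cluster-wide object type
    "object_name": "*",
    "privilege": "FlushAll",  # the "Flush All" API is now RBAC-protected
}
# With pymilvus the equivalent call would look roughly like:
#   Role(grant["role"]).grant(grant["object"], grant["object_name"], grant["privilege"])
```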
Bug Fixes
- Fixed a random crash introduced by the AWS S3 SDK
- Fixed "show loaded collections" (#24628) (#24629)
- Fixed creating a collection not being idempotent (#24721) (#24722)
- Fixed DB name being empty in the "describe collection" response (#24603)
- Fixed deleted data still being visible (#24796)
Enhancements
- Replaced the CGO payload writer with a Go payload writer to reduce memory usage (#24656)
- Enabled max result window limit (#24768)
- Removed unused iterator initialization (#24758)
- Enabled metric type checks before search (#24652) (#24716)
- Used go-api/v2 for milvus-proto (#24723)
- Optimized the penalty mechanism for exceeding rate limits (#24624)
- Allowed default params in HNSW & DISKANN (#24807)
- Security
  - [2.2] Bumped github.com/gin-gonic/gin from 1.9.0 to 1.9.1 (#24830)
Performance
- Fixed build index performance downgrade (#24651)
milvus-2.2.9
2.2.9
Release date: 1 June, 2023
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.2.9 | 2.2.9 | 2.2.5 | 2.2.3 | 2.2.11 |
Milvus 2.2.9 has added JSON support, allowing for more flexible schemas within collections through dynamic schemas. The search efficiency has been improved through partition keys, which enable data separation for different data categories, such as multiple users, in a single collection. Additionally, database support has been integrated into Role-Based Access Control (RBAC), further fortifying multi-tenancy management and security. Support has also been extended to Alibaba Cloud OSS, and connection management has been refined, resulting in an improved user experience.
As always, this release includes bug fixes, enhancements, and performance improvements. Notably, disk usage has been significantly reduced, and performance has been improved, particularly for filtered searches.
We hope you enjoy the latest release!
Breaking changes
For users deploying with standalone MinIO, please be aware that Milvus standalone data is not compatible due to https://min.io/docs/minio/linux/operations/install-deploy-manage/migrate-fs-gateway.html. You have to manually migrate the data to a new MinIO instance before upgrading.
New Features
- JSON support
- Dynamic schema
- Partition key
- Database support in RBAC
- Connection management
- Alibaba Cloud OSS support
- Additional features
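A sketch of how the headline 2.2.9 features fit together in a collection schema. Field names are illustrative, and the pymilvus keyword arguments shown in comments are assumptions based on the feature descriptions rather than verified signatures.

```python
# Plain-dict model of a schema using partition keys and dynamic fields.
schema_fields = [
    {"name": "id", "type": "INT64", "is_primary": True},
    # Rows sharing a partition-key value land in the same partition,
    # separating e.g. per-tenant data inside a single collection.
    {"name": "tenant", "type": "VARCHAR", "is_partition_key": True},
    {"name": "embedding", "type": "FLOAT_VECTOR", "dim": 128},
]
# Dynamic schema: undeclared fields in inserted rows are kept alongside
# the declared ones instead of being rejected.
enable_dynamic_field = True
# pymilvus equivalents (assumed):
#   FieldSchema("tenant", DataType.VARCHAR, max_length=64, is_partition_key=True)
#   CollectionSchema(fields, enable_dynamic_field=True)
```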
Bug fixes
- Added temporary disk data cleaning upon the start of Milvus (#24400).
- Fixed crash issue of bulk insert caused by an invalid Numpy array file (#24480).
- Fixed an empty result set type for Int8~Int32 (#23851).
- Fixed the panic that occurs while balancing the release of a collection (#24003) (#24070).
- Fixed an error that occurs when a role removes a user that has already been deleted (#24049).
- Fixed an issue where session stop/goingStop becomes stuck after a lost connection (#23771).
- Fixed the panic caused by incorrect logic of getting unindexed segments (#24061).
- Fixed the panic that occurs when a collection does not exist in quota effect (#24321).
- Fixed an issue where refresh may be notified as finished early (#24438) (#24466).
Enhancement
- Added an error response to return when an unimplemented request is received (#24546)
- Reduced disk usage for Milvus Lite and Standalone
- Optimized quota to avoid OOM on search
- Added consistency_level in search/query requests (#24541)
- Supported search with default parameters (#24562) (#24516)
- Made DataNode load statslog lazily if SkipBFStatsLog is true (#23779)
- Made QueryNode load statslog lazily if SkipBFLoad is true (#23904)
- Fixed concurrent map read/write in the rate limiter (#23957)
- Improved load/release performance
- Optimized the PrivilegeAll permission check (#23972)
- Fixed the "not shard leader" error when gracefully stopping (#24038)
- Lowered the task merge cap to mitigate an insufficient-memory error (#24233)
- Removed the constraint that prevented creating an index after load (#24415)
Performance improvements
milvus-2.2.8
v2.2.8
Release date: 2 May, 2023
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.2.8 | 2.2.8 | 2.2.5 | 2.2.2 | 2.2.7 |
In this update, we fixed one critical bug.
Bugfix
- Fixed RootCoord panic when upgrading from v2.2.x to v2.2.7 #23828
milvus-2.2.7
v2.2.7
Release date: 28 April, 2023
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.2.7 | 2.2.8 | 2.2.5 | 2.2.2 | 2.2.7 |
In this update, we have focused on resolving various issues reported by our users, enhancing the software's overall stability and functionality. Additionally, we have implemented several optimizations, such as load balancing, search grouping, and memory usage improvements.
Bugfix
- Fixed a panic caused by not removing metadata of a dropped segment from the DataNode. (#23492)
- Fixed a bug that caused forever blocking due to the release of a non-loaded partition. (#23612)
- To prevent the query service from becoming unavailable, automatic balancing at the channel level has been disabled as a workaround. (#23632) (#23724)
- Cancel failed tasks in the scheduling queue promptly to prevent an increase in QueryCoord scheduling latency. (#23649)
- Fixed a compatibility bug and recalculated segment rows to prevent service queries from being unavailable. (#23696)
- Fixed a bug in the superuser password validation logic. (#23729)
- Fixed the issue of shard detector rewatch failure, which was caused by returning a closed channel. (#23734)
- Fixed a loading failure caused by unhandled interrupts in the AWS SDK. (#23736)
- Fixed the "HasCollection" check in DataCoord. (#23709)
- Fixed the bug that assigned all available nodes to a single replica incorrectly. (#23626)
Enhancement
- Optimized the display of RootCoord histogram metrics. (#23567)
- Reduced peak memory consumption during collection loading. (#23138)
- Removed unnecessary handoff event-related metadata. (#23565)
- Added a plugin logic to QueryNode to support the dynamic loading of shared library files. (#23599)
- Supported load balancing with replica granularity. (#23629)
- Released a load-balancing strategy based on scores. (#23805)
- Added a coroutine pool to limit the concurrency of cgo calls triggered by "delete". (#23680)
- Improved the compaction algorithm to make the distribution of segment sizes tend towards the ideal value. (#23692)
- Changed the default shard number to 1. (#23593)
- Improved search grouping algorithm to enhance throughput. (#23721)
- Code refactoring: Separated the read, build, and load DiskANN parameters. (#23722)
- Updated etcd and Minio versions. (#23765)
milvus-2.2.6
v2.2.6
Release date: 18 April, 2023
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.2.6 | 2.2.7 | 2.2.5 | 2.2.1 | 2.2.4 |
Upgrade to Milvus 2.2.6 as soon as possible!
You are advised to refrain from using version 2.2.5 due to several critical issues, one of which is the inability to recycle dirty binlog data. Version 2.2.6 addresses these issues, and we highly recommend upgrading to it instead of using version 2.2.5 to avoid any potential complications.
If you hit the issue where data on object storage cannot be recycled, upgrade your Milvus to v2.2.6 to fix these issues.
Bugfix
- Fixed the problem of DataCoord GC failure (#23298)
- Fixed the problem that index parameters passed when creating a collection will override those passed in subsequent create_index operations (#23242)
- Fixed a message backlog in RootCoord that increased the latency of the whole system (#23267)
- Fixed the accuracy of metric RootCoordInsertChannelTimeTick (#23284)
- Fixed the issue that the timestamp reported by the proxy may stop in some cases (#23291)
- Fixed the problem that the coordinator role may self-destruct by mistake during the restart process (#23344)
- Fixed the problem that the checkpoint is left behind due to the abnormal exit of the garbage collection goroutine caused by the etcd restart (#23401)
Enhancement
- Added slow logging for query/search when the latency is no less than 5 seconds (#23274)
milvus-2.2.5
2.2.5
Release date: 29 March, 2023
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.2.5 | 2.2.4 | 2.2.4 | 2.2.1 | 2.2.4 |
Security
Fixed MinIO CVE-2023-28432 by upgrading MinIO to RELEASE.2023-03-20T20-16-18Z.
New Features
- First/Random replica selection policy

  This policy allows for a random replica selection if the first replica chosen under the round-robin selection policy fails. This improves the throughput of database operations.
Bug fixes
- Fixed index data loss during the upgrade from Milvus 2.2.0 to 2.2.3
- Fixed DataCoord out-of-memory (OOM) under heavy flush pressure
- Fixed a concurrency issue in the LRU cache caused by concurrent queries with specified output fields
- Used single-flight to limit concurrent readWithCache operations (#23037)
- Fixed LRU cache concurrency (#23041)
- Fixed a query performance issue with a large number of segments (#23028)
- Fixed the shard leader cache
- Fixed GetShardLeader returning an old leader (#22887) (#22903)
- Deprecated the shard cache immediately if a query fails (#22848)
- Enabled batch deletion of files on GCP MinIO (#23052) (#23083)
- Fixed flushing the delta buffer when SegmentID equals 0 (#23064)
- Fixed unassignment from a resource group (#22800)
- Fixed load partition timeout logic still using createdAt (#23022)
- Fixed unsubscribing a channel always removing QueryShard (#22961)
Enhancements
- Added memory protection by using the buffer size in the memory synchronization policy (#22797)
- Added dimension checks upon inserted records (#22819) (#22826)
- Added a configuration item to disable BF load (#22998)
- Aligned the maximum dimensions of the DiskANN index with those of a collection (#23027)
- Added checks to ensure all columns are aligned with the same num_rows (#22968) (#22981)
- Upgraded Knowhere to 1.3.11 (#22975)
- Added the user RPC counter (#22870)
milvus-2.3.0 beta
2.3.0 beta
Release date: 20 March, 2023
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.3.0 beta | 2.2.3b1 | N/A | N/A | N/A |
The latest release of Milvus introduced a new feature that will please many users: Nvidia GPU support. This new feature brings the ability to support heterogeneous computing, which can significantly accelerate specialized workloads. With GPU support, users can expect faster and more efficient vector data searches, ultimately improving productivity and performance.
Features
GPU support
Milvus now supports two GPU-based IVF indexes: RAFT and FAISS. According to a benchmark on RAFT's GPU-based IVF-series indexes, GPU indexing achieves a 10x increase in search performance on large NQ cases.
- Benchmark

We compared RAFT-IVF-Flat with IVF-Flat and HNSW at a recall rate of 95% and obtained the following results.

Datasets | SIFT | GIST | GLOVE | Deep |
---|---|---|---|---|
HNSW (VPS) | 14,537 | 791 | 1516 | 5761 |
IVF-Flat (VPS) | 3097 | 142 | 791 | 723 |
RAFT-IVF-Flat (VPS) | 121,568 | 5737 | 20,163 | 16,557 |

We also benchmarked RAFT-IVF-PQ against Knowhere's fastest index, HNSW, at 80% recall.

Datasets | SIFT | GIST | GLOVE | Deep |
---|---|---|---|---|
HNSW (VPS) | 20,809 | 2593 | 8005 | 13,291 |
RAFT-IVF-PQ (VPS) | 271,885 | 7448 | 38,989 | 80,363 |

These benchmarks were run against Knowhere on a host with an 8-core CPU, 32 GB of RAM, and an Nvidia A100 GPU, with an NQ of 100.

For details on these benchmarks, refer to the release notes of Knowhere v2.1.0.
Special thanks go to @wphicks and @cjnolet from Nvidia for their contributions to the RAFT code.
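On the client side, requesting one of these GPU indexes can be sketched as an index-params dict. The index-type string and parameter names follow the usual Milvus index-params shape but should be treated as assumptions for the 2.3.0 beta; the field name is illustrative.

```python
# Hypothetical GPU index request (names are assumptions for the 2.3.0 beta).
gpu_index_params = {
    "index_type": "GPU_IVF_FLAT",  # RAFT-backed IVF-Flat on the GPU
    "metric_type": "L2",
    "params": {"nlist": 1024},     # number of IVF clusters
}
# With a live server:
#   collection.create_index("embedding", gpu_index_params)
```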
Memory-mapped (mmap) file I/O
In scenarios where there is not enough memory for large datasets and query performance is not critical, Milvus uses mmap to allow the system to treat parts of a file as if they were in memory. This reduces memory usage while keeping performance good as long as the accessed data remains in the system page cache.
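The mechanism can be illustrated with Python's stdlib mmap module; this is a minimal illustration of the principle, not Milvus's implementation: the file's contents are addressed like memory, but pages are brought in by the OS on demand, so resident memory stays small.

```python
import mmap
import os
import tempfile

# Create a fake 64 KiB segment file to map.
path = os.path.join(tempfile.mkdtemp(), "vectors.bin")
with open(path, "wb") as f:
    f.write(b"\x01" * 4096 * 16)

# Map the file and read a slice as if it were an in-memory buffer;
# the OS pages in only the regions that are actually touched.
with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
    chunk = m[4096:4100]

print(chunk)  # b'\x01\x01\x01\x01'
```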
Range search
The range search method returns all vectors within a certain radius around the query point, as opposed to the k-nearest ones. Range search is a valuable tool for querying vectors within a specific distance, for use cases such as anomaly detection and object distinction.
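A range-search request can be sketched as a search-param dict: instead of the k nearest neighbours, the server returns every vector whose distance to the query lies within the given radius. The parameter names (radius, range_filter) are assumptions based on the feature description; the field name in the commented call is illustrative.

```python
def build_range_search_param(radius, range_filter=0.0, nprobe=16):
    """Build the `param` dict passed to Collection.search() for a range search."""
    return {
        "metric_type": "L2",
        "params": {
            "radius": radius,              # outer distance bound
            "range_filter": range_filter,  # inner bound, e.g. to drop near-duplicates
            "nprobe": nprobe,
        },
    }

param = build_range_search_param(radius=1.0, range_filter=0.2)
# With a live server:
#   hits = collection.search([query_vec], "embedding", param=param, limit=100)
```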
Upsert
Milvus now supports record upsert, similar to that in a relational database. This operation atomically deletes the original entity with the primary key (PK) and inserts a new entity. Note that upserts can only be applied to a given primary key.
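The upsert semantics can be modeled with a toy dict: re-sending a primary key replaces the entity rather than duplicating it, which is what the server does atomically (delete-by-PK, then insert). The collection call in the comment is illustrative.

```python
def upsert_rows(store, rows):
    """Toy model of upsert. store: {pk: entity}; rows: list of (pk, entity) pairs."""
    for pk, entity in rows:
        store[pk] = entity  # delete-then-insert collapses to an overwrite
    return store

store = upsert_rows({}, [(1, "v1"), (2, "v2")])
store = upsert_rows(store, [(1, "v1-updated")])  # same PK: replaced, not duplicated
# With a live 2.3 server the real call is roughly:
#   collection.upsert(data)
```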
Change Data Capture (CDC)
Change Data Capture is a process that identifies and tracks changes to data in a database. Milvus CDC provides real-time subscriptions to data and database events as they occur.
In addition to the aforementioned features, later 2.3 releases of Milvus will also introduce new features such as accurate count support, Feder visualization support, and growing segment indexing.
Milvus will later offer Dynamic Partitioning, which allows users to conveniently create and load a partition without releasing the collection. In addition, Milvus 2.3.0 will improve memory management, performance, and manageability in multi-partition cases.
Now, you can download Milvus and get started.
milvus-2.2.4
2.2.4
Release date: 17 March, 2023
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.2.4 | 2.2.3 | 2.2.3 | 2.2.1 | 2.2.4 |
Milvus 2.2.4 is a minor update to Milvus 2.2.0. It introduces new features, such as namespace-based resource grouping, collection-level physical isolation, and collection renaming.
In addition to these features, Milvus 2.2.4 also addresses several issues related to rolling upgrades, failure recovery, and load balancing. These bug fixes contribute to a more stable and reliable system.
We have also made several enhancements to make your Milvus cluster faster and consume less memory with reduced convergence time for failure recovery.
New Features
- Resource grouping

  Milvus has implemented resource grouping for QueryNodes. A resource group is a collection of QueryNodes. Milvus supports grouping QueryNodes in the cluster into different resource groups, where access to physical resources in different resource groups is completely isolated. See Manage Resource Group for more information.
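The bookkeeping behind resource groups can be sketched as moving QueryNodes between named, non-overlapping groups. Group names are illustrative, and the pymilvus helpers in the comments are assumptions based on the feature description.

```python
def transfer_node(groups, source, target, num_node):
    """Toy model: move num_node QueryNodes between resource groups.

    groups: {group_name: node_count}. Groups never share nodes.
    """
    if groups.get(source, 0) < num_node:
        raise ValueError("not enough QueryNodes in the source group")
    groups[source] -= num_node
    groups[target] = groups.get(target, 0) + num_node
    return groups

groups = {"__default_resource_group": 4}
transfer_node(groups, "__default_resource_group", "rg_tenant_a", 2)
# pymilvus equivalents (assumed):
#   utility.create_resource_group("rg_tenant_a")
#   utility.transfer_node("__default_resource_group", "rg_tenant_a", num_node=2)
```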
- Collection renaming

  The Collection-renaming API provides a way for users to change the name of a collection. Currently, PyMilvus supports this API, and SDKs for other programming languages are on the way. See Rename a Collection for details.
- Google Cloud Storage support

  Milvus now supports Google Cloud Storage as the object storage.
- New option to the search and query APIs

  If you are more concerned with performance than with data freshness, enabling this option skips searching all growing segments and offers better search performance in scenarios where searches run concurrently with insertions. See search() and query() for details.
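The new freshness-for-speed switch can be sketched as an extra search keyword. The keyword name (ignore_growing) is an assumption based on the feature description; field and parameter names are illustrative.

```python
# Hypothetical search call skipping unsealed (growing) segments.
search_kwargs = {
    "param": {"metric_type": "L2", "params": {"nprobe": 16}},
    "limit": 10,
    "ignore_growing": True,  # skip growing segments: faster, but misses fresh inserts
}
# With a live server:
#   hits = collection.search([query_vec], "embedding", **search_kwargs)
```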
Bugfix
- Fixed segment not found when forwarding delete to empty segment #22528 #22551
- Fixed possible broken channel checkpoint in v2.2.2 #22205 #22227
- Fixed entity number mismatch with some entities inserted #22306
- Fixed DiskANN recovery failure after QueryNode reboots #22488 #22514
- Fixed search/release on same segment #22414
- Fixed file system crash during bulk-loading files prefixed with a '.' #22215
- Added tickle for DataCoord watch event #21193 #22209
- Fixed deadlock when releasing segments and removing nodes concurrently #22584
- Added channel balancer on DataCoord #22324 #22377
- Fixed balance generated reduce task #22236 #22326
- Fixed QueryCoord panic caused by balancing #22486
- Added scripts for rolling update Milvus's component installed with helm #22124
- Added NotFoundTSafer and NoReplicaAvailable to retriable error codes #22505
- Fixed no retries upon gRPC error #22529
- Fixed an issue for automatic component state update to healthy after start #22084
- Added graceful-stop for sessions #22386
- Added retry op for all servers #22274
- Fixed metrics info panic when network error happens #22802
- Fixed disordered minimum timestamp in proxy's pchan statistics #22756
- Fixed an issue to ensure segment ID recovery upon failures to send time-tick #22771
- Added segment info retrieval without the binlog path #22741
- Added distribution.Peek for GetDataDistribution in case of blocked by release #22752
- Fixed the segment not found error #22739
- Reset delta position to vchannel in packSegmentLoadReq #22721
- Added vector float data verification for bulkinsert and insert #22729
- Upgraded Knowhere to 1.3.10 to fix bugs #22746
- Fixed RootCoord double updates TSO #22715 #22723
- Fixed confused time-tick logs #22733 #22734
- Fixed session nil point #22696
- Upgraded Knowhere to 1.3.10 #22614
- Fixed incorrect sequence of timetick statistics on proxy #21855 #22560
- Enabled DataCoord to handle GetIndexedSegment error from IndexCoord #22673
- Fixed an issue for Milvus writes flushed segment key only after the segment is flushed #22667
- Marked cache deprecated instead of removing it #22675
- Updated shard leader cache #22632
- Fixed an issue for the replica observer to assign node #22635
- Fixed the not found issue when retrieving collection creation timestamp #22629 #22634
- Fixed time-tick running backwards during DDLs #22617 #22618
- Fixed max collection name case #22601
- Fixed DataNode tickle not running by default #22622
- Fixed DataCoord panic while reading timestamp of an empty segment #22598
- Added scripts to get etcd info #22589
- Fixed concurrent loading timeout during DiskANN indexing #22548
- Fixed an issue to ensure index file not finish early because of compaction #22509
- Added MultiQueryNodes tag for resource group #22527 #22544
Enhancement
- Performance
  - Improved query performance by avoiding counting all bits #21909 #22285
  - Fixed dual copy of varchar fields while loading #22114 #22291
  - Updated DataCoord compaction panic after DataNode update plan to ensure consistency #22143 #22329
  - Improved search performance by avoiding allocating a zero-byte vector during searches #22219 #22357
  - Upgraded Knowhere to 1.3.9 to accelerate IVF/BF #22368
  - Improved search task merge policy #22006 #22287
  - Refined the Read method of MinioChunkManager to reduce IO #22257
- Memory Usage
- Others
milvus-2.2.3
2.2.3
Release date: 10 Feb, 2023
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.2.3 | 2.2.2 | 2.2.3 | coming soon | 2.2.1 |
Milvus 2.2.3 introduces the rolling upgrade capability to Milvus clusters and brings high availability settings to RootCoords. The former greatly reduces the impacts brought by the upgrade and restart of the Milvus cluster in production to the minimum, while the latter enables coordinators to work in active-standby mode and ensures a short failure recovery time of no more than 30 seconds.
In this release, Milvus also ships with a lot of improvements and enhancements in performance, including a fast bulk-insert experience with reduced memory usage and less loading time.
Breaking changes
In 2.2.3, the maximum number of fields in a collection is reduced from 256 to 64. (#22030)
Features
- Rolling upgrade using Helm

  The rolling upgrade feature allows Milvus to respond to incoming requests during the upgrade, which was not possible in previous releases. In those releases, upgrading a Milvus instance required stopping it first and restarting it after the upgrade completed, leaving all incoming requests unanswered.

  Currently, this feature applies only to Milvus instances installed using Milvus Helm charts.

  Related issues:
- Coordinator HA

  Coordinator HA allows Milvus coordinators to work in active-standby mode to avoid single points of failure.

  Related issues:
- HA-related issues identified and fixed in QueryCoordV2 (#21501)
- Auto-registration on the startup of Milvus was implemented to prevent both coordinators from working as the active coordinators. (#21641)
- HA-related issues identified and fixed in RootCoords (#21700)
- Issues identified and fixed in active-standby switchover (#21747)
Enhancements
- Bulk-insert performance enhanced
  - Bulk-insert enhancement implemented (#20986 #21532)
  - JSON parser optimized for data import (#21332)
  - Stream-reading NumPy data implemented (#21540)
  - Bulk-insert progress report implemented (#21612)
  - Issues identified and fixed so that Milvus checks indexes before flushing segments while bulk-insert is in progress (#21604)
  - Issues related to bulk-insert progress identified and fixed (#21668)
  - Issues related to bulk-insert reports identified and fixed (#21758)
  - Issues identified and fixed so that Milvus does not seal failed segments while performing bulk-insert operations (#21779)
  - Issues identified and fixed so that bulk-insert operations do not cause a slow flush (#21918)
  - Issues identified and fixed so that bulk-insert operations do not crash the DataNodes (#22040)
  - Refresh option added to LoadCollection and LoadPartition APIs (#21811)
  - Segment ID update on data import implemented (#21583)
- Memory usage reduced
- Monitoring metrics optimized
- Meta storage performance improved
  - Improved ListSegments performance for the DataCoord catalog (#21600)
  - Improved LoadWithPrefix performance for SuffixSnapshot (#21601)
  - Removed redundant LoadPrefix requests for Catalog ListCollections (#21551) (#21594)
  - Added a WalkWithPrefix API for the MetaKv interface (#21585)
  - Added GC for snapshot KV based on time-travel (#21417) (#21763)
- Performance improved
  - Upgraded Knowhere to 1.3.7 (#21735)
  - Upgraded Knowhere to 1.3.8 (#22024)
  - Skipped the search gRPC call for standalone (#21630)
  - Optimized some inefficient code (#20529) (#21683)
  - Fixed filling the string field twice when a string index exists (#21852) (#21865)
  - Used the all() API for bitset checks (#20462) (#21682)
- Others
  - Implemented the GetLoadState API (#21533)
  - Added a task to unsubscribe dmchannel (#21513) (#21794)
  - Explicitly listed the triggering reasons when Milvus denies reading/writing (#21553)
  - Verified and adjusted the number of rows in a segment before saving and passing SegmentInfo (#21200)
  - Added a segment seal policy by the number of binlog files (#21941)
  - Upgraded etcd to 3.5.5 (#22007)
Bug Fixes
- QueryCoord segment replacement fixed
  - Fixed the mismatch of sealed segment IDs after enabling load-balancing in 2.2 (#21322)
  - Fixed the sync logic of the leader observer (#20478) (#21315)
  - Fixed issues where observers may update the current target to an unfinished next target (#21107) (#21280)
  - Fixed the load timeout after the next target updates (#21759) (#21770)
  - Fixed the issue that the current target may be updated to an invalid target (#21742) (#21762)
  - Fixed the issue that a failed node may update the current target to an unavailable target (#21743)
- Improperly invalidated proxy cache fixed
- CheckPoint and GC related issues fixed
- Issues...
milvus-2.2.2
2.2.2
Release date: 22 December, 2022
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.2.2 | 2.2.1 | 2.2.1 | 2.2.0 | 2.2.1 |
Milvus 2.2.2 is a minor fix release for Milvus 2.2.1. It fixed a few loading-failure issues introduced in the upgrade to 2.2.1, as well as an issue where the proxy cache was not cleaned upon certain types of errors.