
Add similarity search batch processing and tests #437

Open · unicoder88 wants to merge 6 commits into main

Conversation

@unicoder88 commented May 19, 2024

Hi! This pull request adds the ability to pass multiple texts in a single request, while remaining compatible with the single-text format.

Solves issues #431 and #430

Considerations taken:

  • max input items for Workers AI - https://developers.cloudflare.com/workers-ai/models/bge-base-en-v1.5/#api-schema

    The model allows max 512 input tokens, while the API schema suggests "maxItems": 100.
    It's unclear which limit applies, so it's configurable via the MAX_INPUT environment variable, with 100 as the default.

  • Query vectors doesn't seem to support batching, so I opted to run all requests in parallel with Promise.all - all requests must succeed (sketched below). An alternative is Promise.allSettled, which tolerates individual request failures, but then we'd have to come up with a per-item error format, which isn't worth it for now.
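Roughly, the batch flow looks like this (a simplified sketch; the binding names `AI` and `VECTORIZE` and the error handling are illustrative, not necessarily what the worker uses):

```ts
// Simplified sketch of the batch flow; binding names are illustrative.
interface Env {
  AI: any;        // Workers AI binding
  VECTORIZE: any; // Vectorize index binding
  MAX_INPUT?: string;
}

async function searchBatch(env: Env, texts: string[]) {
  const maxInput = Number(env.MAX_INPUT ?? 100);
  if (texts.length > maxInput) {
    throw new Error(`Batch of ${texts.length} texts exceeds MAX_INPUT=${maxInput}`);
  }
  // One Workers AI call embeds the whole batch...
  const { data: vectors } = await env.AI.run("@cf/baai/bge-base-en-v1.5", { text: texts });
  // ...but Vectorize takes one query vector per call, so query in parallel.
  // Promise.all fails fast: if any single query rejects, the whole batch fails.
  return Promise.all(
    vectors.map((vector: number[]) => env.VECTORIZE.query(vector, { topK: 5 }))
  );
}
```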


In addition, basic high-level integration tests were added.

@unicoder88 changed the title from "Add similarity search batch processing" to "Add similarity search batch processing and tests" on May 19, 2024
@evgenydmitriev (Contributor)

@unicoder88, what's the impact of batching on performance/costs in duplicate-heavy use cases?

@unicoder88 (Author) commented May 20, 2024

@evgenydmitriev, I've added unique-text processing and updated the tests. Only unique texts are sent to the workers, and the results are mapped back to the original inputs.
This way, duplicate-heavy use cases still consume inbound traffic but don't increase Cloudflare usage costs.
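Conceptually (an illustrative sketch, not the exact code from this PR):

```ts
// Deduplicate before embedding: only unique texts hit Workers AI,
// then each original position gets its vector back from the map.
async function embedUnique(env: Env, texts: string[]): Promise<number[][]> {
  const unique = [...new Set(texts)];
  const { data } = await env.AI.run("@cf/baai/bge-base-en-v1.5", { text: unique });
  const byText = new Map(unique.map((t, i) => [t, data[i]]));
  return texts.map((t) => byText.get(t)!);
}
```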

@evgenydmitriev (Contributor)

Users creating batches of identical messages is a good consideration, but my question wasn't about duplicates within a batch. A much bigger duplicate problem might show up across multiple requests/users. Where do you think the biggest costs are? How does your batching method affect those costs (if at all)?

@unicoder88 (Author) commented May 22, 2024

Thanks @evgenydmitriev. It looks like the most expensive part is the Vectorize calls. I intend to add the Cloudflare Cache API in front of each individual vector lookup.

Estimated price for 1,000 requests:

Workers AI

https://developers.cloudflare.com/workers-ai/platform/pricing/

$0.011 / 1,000 Regular Twitch Neurons (also known as Neurons).

https://ai.cloudflare.com/#pricing-calculator

Price: $0.011 / 1,000 Neurons
768 tokens × 1,000 requests = 970 Neurons = $0.0011 when over the free limit

Vectorize

https://developers.cloudflare.com/vectorize/platform/pricing/
Price: $0.040 per million queried vector dimensions

Queried Vector Dimensions: the total number of vector dimensions queried. With 50,000 vectors of 768 dimensions in an index and 1,000 queries against that index, the total queried vector dimensions sum to ((50,000 + 1,000) × 768) = 39.168 million ≈ $1.567, i.e. about one and a half dollars per 1,000 requests.

@evgenydmitriev (Contributor)


@unicoder88 your pricing math is off, and I personally think that manually implementing caching is almost always overkill, but you are thinking in the right direction. There might be an easier solution, though.

@unicoder88 (Author) commented May 22, 2024

Actually, you're right @evgenydmitriev! A database is a better and more straightforward solution than a distributed cache here, so I'm now planning to use D1: create a schema, then query and insert results there.

D1 pricing looks tasty:

Rows written: 100,000 / day free; first 50 million / month included, then $1.00 / million rows

In the worst case - all new vectors and the free write quota exhausted - 1,000 writes will cost $0.001.
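A minimal sketch of the D1 idea, assuming a hypothetical table created with `CREATE TABLE embeddings (text_hash TEXT PRIMARY KEY, vector TEXT)` and a `DB` binding (neither exists in the repo yet):

```ts
// Look up a cached embedding in D1 by content hash; compute and store on miss.
// Assumes Env also carries a DB: D1Database binding.
async function getOrComputeEmbedding(env: Env & { DB: any }, text: string): Promise<number[]> {
  const hash = await sha256Hex(text);
  const row = await env.DB.prepare("SELECT vector FROM embeddings WHERE text_hash = ?")
    .bind(hash)
    .first<{ vector: string }>();
  if (row) return JSON.parse(row.vector);

  const { data } = await env.AI.run("@cf/baai/bge-base-en-v1.5", { text: [text] });
  await env.DB.prepare("INSERT OR IGNORE INTO embeddings (text_hash, vector) VALUES (?, ?)")
    .bind(hash, JSON.stringify(data[0]))
    .run();
  return data[0];
}

async function sha256Hex(text: string): Promise<string> {
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(text));
  return [...new Uint8Array(digest)].map((b) => b.toString(16).padStart(2, "0")).join("");
}
```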

@evgenydmitriev (Contributor)

I'm not sure D1 is necessarily cheaper than the Cloudflare cache, but it might be, depending on the usage. My argument was about avoiding manual caching (whatever data store you choose). There's a much simpler solution that is already implemented for the worker, but it would be broken by your batching approach.

@unicoder88 (Author)

Hi @evgenydmitriev! I have some updates.

First, the worker limits:

  • max 6 simultaneous outgoing fetch requests
  • 128 MB memory
  • duration: no limit
  • CPU time: 30 s per HTTP request

A possible bottleneck is the 6-simultaneous-requests limit: this batching method attempts to execute up to 100 requests at once, so the rest must be throttled (roughly the limiter sketched below).
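Something like this generic concurrency limiter would keep the worker within that limit (an illustrative sketch, not code from this PR):

```ts
// Run `fn` over `items` with at most `limit` calls in flight at once.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // safe: this line runs synchronously between awaits
      results[i] = await fn(items[i]);
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}

// e.g. await mapWithConcurrency(vectors, 6, (v) => env.VECTORIZE.query(v, { topK: 5 }));
```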

However, I've built the whole system with the same batching approach, and the results look better than expected.

Processing times do not show considerable slowdown regardless of concurrency or duplicates:

  • Batch search 36 random texts - AI 900 ms, Vectorize 2 s
  • 72 texts - AI 2 s, Vectorize 3 s
  • 100 texts - AI 1-4 s, Vectorize 4-5 s

Parallel execution:

  • 2 parallel executions x search 100 random texts - AI 2 s, Vectorize 5 s
  • 10 parallel executions x search 100 random texts - AI 0.6-3 s, Vectorize 6-7 s
  • 10 parallel executions x search 100 duplicate texts - AI 0.5-4 s, Vectorize 6-7 s

CPU time also looks good.

I would remove the duplicate checking for simplicity, and drop caching, since 768-dimensional vectors are unlikely to match exactly across different texts. I would also improve input validation to require non-empty text, because a single empty text in a batch breaks the whole batch (see the sketch below).
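The non-empty check could be as simple as the following sketch (illustrative, not the PR's exact code):

```ts
// Reject bad input up front instead of letting one empty string fail the whole batch.
function validateTexts(input: unknown): string[] {
  const texts = typeof input === "string" ? [input] : input;
  if (!Array.isArray(texts) || texts.length === 0) {
    throw new Error("Expected a non-empty string or a non-empty array of strings");
  }
  texts.forEach((t, i) => {
    if (typeof t !== "string" || t.trim() === "") {
      throw new Error(`texts[${i}] must be a non-empty string`);
    }
  });
  return texts;
}
```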

I would appreciate any further hints.

@evgenydmitriev (Contributor)

Thanks for the detailed test results. You correctly identified the Workers AI embedding models as the cost bottleneck in one of your previous comments, even accounting for the pricing calculation error. You also correctly identified the need for a caching mechanism there. The question is finding an easier and cheaper way of doing it than manually attaching a data store and querying every incoming message against it. There's an easier solution that's already implemented in the single-message approach, but it would break if we implemented your batching mechanism as is.

@unicoder88 (Author) commented May 27, 2024

@evgenydmitriev, it would be nice to have a URL like GET https://www.example.com/some-namespace?text=some%20text and let the Cloudflare CDN cache those responses.
I've tried this, and it doesn't work because workers run before the cache.

What I tried:


  • attaching the worker to a Custom Domain
  • adding a bunch of Cache-Control headers to the response:

    c.res.headers.append("Cache-Control", "public, max-age=14400, s-maxage=84000, stale-if-error=60")
    c.res.headers.append("Cloudflare-CDN-Cache-Control", "max-age=24400")
    c.res.headers.append("CDN-Cache-Control", "max-age=18000")

@evgenydmitriev (Contributor)

@unicoder88, could you clarify what it is that you are trying to cache? HTTP requests containing message batches?

@unicoder88 (Author)

@evgenydmitriev, well, the latest suggestion is to start small and create an endpoint that works on a single message but is cacheable by URL.
Then either have clients issue multiple parallel requests on their own when needed (Cloudflare will take care of scaling), or build some sort of gateway on top of it that takes a batch and dispatches to this worker. The latter is still unclear.

@evgenydmitriev (Contributor) commented May 27, 2024

> well, the latest suggestion is to start small and create an endpoint that works on a single message but is cacheable by URL.

Isn't this what the current worker does? You do not need a custom domain to cache HTTP requests to Cloudflare Workers.

@unicoder88 (Author)

Right now it's a POST body, so it shouldn't be cacheable by default. In this prototype I converted it to GET (with no luck, admittedly 😀).

@unicoder88 (Author)

Hi @evgenydmitriev, please review the updates.
