I need to send 1 million HTTP requests concurrently, in batches, and read the responses. No more than 100 requests at a time.

Which approach is better, recommended, idiomatic?

  • Send 100, wait for all of them to finish, send another 100, wait for those to finish… and so on

  • Send 100. Each time a request among the 100 finishes, add a new one into the pool: “done, add a new one; done, add a new one”. As a stream.

  • deegeese@sopuli.xyz

    That’s not 1M concurrent requests.

    That’s 100 concurrent requests for a queue of 1M tasks.

    A work queue and a thread pool is the normal way, but it’s possible to get fancy with optimizations.

    Basically you fire 100 requests, and when one completes you immediately fire another.
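
    For illustration, a minimal sketch of that pattern, assuming Python with aiohttp (the OP never specified a language, and the URLs here are placeholders):

    ```python
    import asyncio
    import aiohttp

    URLS = [f"https://example.com/item/{i}" for i in range(1_000_000)]  # placeholder URLs
    CONCURRENCY = 100

    async def worker(queue, session, results):
        # Each worker pulls the next URL as soon as it is free, so 100
        # requests stay in flight until the queue drains.
        while True:
            url = await queue.get()
            try:
                async with session.get(url) as resp:
                    results.append((url, resp.status, await resp.read()))
            except aiohttp.ClientError as exc:
                results.append((url, None, exc))
            finally:
                queue.task_done()

    async def main():
        queue = asyncio.Queue()
        for url in URLS:
            queue.put_nowait(url)
        # NB: keeping 1M bodies in a list is only for the sketch;
        # stream results to disk or a consumer in real code.
        results = []
        async with aiohttp.ClientSession() as session:
            workers = [asyncio.create_task(worker(queue, session, results))
                       for _ in range(CONCURRENCY)]
            await queue.join()      # every queued URL has been handled
            for w in workers:
                w.cancel()          # workers loop forever; stop them now
            await asyncio.gather(*workers, return_exceptions=True)
        return results

    if __name__ == "__main__":
        asyncio.run(main())
    ```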

  • dark_stang@beehaw.org

    Not enough info. What are you trying to actually accomplish here? If you’re stress testing and trying to measure how fast a server can process all those requests, use something like JMeter. You can tell it to run 100 concurrent threads with 10,000 requests each, then call it a day.

    • cuenca@lemm.eeOP

      Not enough info. What are you trying to actually accomplish here by asking me this question?

    • douglasg14b@beehaw.org

      For most users, JMeter is difficult to approach.

      Something like autocannon or ddosify may be nicer.

  • Borger@lemmy.blahaj.zone

    The second option. With the first option you’ll end up in situations where you have spare compute/network capacity sitting unused, because all the remaining requests in the current batch of 100 are still being handled by other threads / worker processes and nothing new can start until the whole batch completes.
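
    A minimal sketch of this streaming approach, again assuming Python with aiohttp: an asyncio.Semaphore caps the number of in-flight requests, so a replacement starts the instant any single request finishes rather than when the whole batch does:

    ```python
    import asyncio
    import aiohttp

    CONCURRENCY = 100

    async def fetch(sem, session, url):
        async with sem:                 # at most 100 permits held at once
            async with session.get(url) as resp:
                return url, resp.status, await resp.read()

    async def run_all(urls):
        sem = asyncio.Semaphore(CONCURRENCY)
        async with aiohttp.ClientSession() as session:
            tasks = [fetch(sem, session, u) for u in urls]
            # Note: gather creates a task per URL up front; for the full 1M,
            # the queue-and-workers variant above is lighter on memory.
            return await asyncio.gather(*tasks, return_exceptions=True)
    ```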

      • catacomb@beehaw.org

        Where did you get 100 from? I’m just asking if it’s a real limit or a guess at “some manageable number” under one million.

        It can be worth experimenting with and tuning this value. You might even find that fewer than 100 works better.
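
        A hypothetical tuning loop, assuming either sketch above has been wrapped in a run_sample(urls, limit) coroutine (a made-up name) that takes the concurrency limit as a parameter: time a sample at several limits and keep the fastest:

        ```python
        import asyncio
        import time

        async def run_sample(urls, limit):
            ...  # stand-in for either sketch above, parameterised by `limit`

        def tune(urls):
            sample = urls[:10_000]      # tune on a sample, not all 1M
            for limit in (25, 50, 100, 200):
                start = time.perf_counter()
                asyncio.run(run_sample(sample, limit))
                print(f"concurrency={limit}: {time.perf_counter() - start:.1f}s")
        ```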

  • Gamma@beehaw.org

    Careful everyone, they didn’t specify programming languages! Don’t even THINK of providing a few lines of Python that would answer the question 🐍

  • Hirom@beehaw.org

    Rewrite the application to be less greedy in the number of requests it submits to the server, and make (better) use of caching. That’ll probably lower the number of concurrent requests that have to be handled.
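
    As a hypothetical illustration of the caching half, assuming Python with aiohttp: memoising responses means a URL that appears many times among the 1M costs only one request:

    ```python
    import aiohttp

    cache: dict[str, bytes] = {}

    async def fetch_cached(session: aiohttp.ClientSession, url: str) -> bytes:
        # Simplification: concurrent duplicates can still race and fetch twice;
        # a real cache would also honour Cache-Control headers and expiry.
        if url in cache:
            return cache[url]
        async with session.get(url) as resp:
            body = await resp.read()
            cache[url] = body
            return body
    ```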