Fly is a smart, decentralized hosting service that makes your application faster by running it on virtual machines in data centers located close to your users. When a user connects to your global IP address, Fly dynamically assigns compute in the data center closest to that user, rather than duplicating resources in every data center near your customers. In short, Fly lets you run Docker images on servers in many different cities, with a global router connecting each user to the nearest available instance.
- Location-smart - when a user connects to a Fly application, the system determines the nearest location for the lowest latency and starts the application there.
- Auto-scaling - Fly creates new instances as more connections arrive at a specific location.
- Agile - your applications adjust to user demand, relocating compute to the locations where demand is expected to be higher.
The creators of Fly provide an example that demonstrates the performance benefits of combining their service with a global Redis cache and Apollo Server. Together, these three technologies power a GraphQL server that can answer queries very efficiently.
When a query arrives at the Apollo Server, it is validated against the defined GraphQL types, and each requested field is matched with its corresponding resolver.
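The matching step can be sketched without any library: each field requested by a query is looked up in a resolver map and its resolver function produces the value. This is a minimal illustration of the idea, not Apollo Server's actual internals; the `book` field and its shape are hypothetical.

```javascript
// Hypothetical resolver map: one resolver function per query field.
const resolvers = {
  Query: {
    // Resolvers receive (parent, args); this one fabricates a record.
    book: (_, { id }) => ({ id, title: `Book ${id}` }),
  },
};

// Toy "execution" step: match a requested field to its resolver and run it.
function execute(fieldName, args) {
  const resolver = resolvers.Query[fieldName];
  if (!resolver) throw new Error(`Unknown field: ${fieldName}`);
  return resolver(null, args);
}

console.log(execute("book", { id: 42 }));
```

Apollo Server performs the same lookup for every field in the parsed query, walking nested types recursively.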
When the underlying REST query is executed, the server responds with JSON containing the requested data and caches the result (if it is configured to do so). Each subsequent query is first checked against the cache, so already-cached results can be served immediately.
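The cache-then-serve flow can be sketched as a small wrapper. This is an assumption-laden sketch, not the example app's code: it uses an in-memory `Map` with a TTL as a stand-in for Redis, and `fetchFromApi` is a placeholder for the real REST call.

```javascript
// In-memory stand-in for Redis; each entry remembers when it was stored.
const cache = new Map();
const TTL_MS = 60 * 60 * 1000; // keep entries for roughly an hour

async function cachedFetch(key, fetchFromApi) {
  const hit = cache.get(key);
  if (hit && Date.now() - hit.storedAt < TTL_MS) {
    return hit.value; // cache hit: no upstream request is made
  }
  // Cache miss: call the upstream API and remember the result.
  const value = await fetchFromApi(key);
  cache.set(key, { value, storedAt: Date.now() });
  return value;
}
```

On the first request for a key the upstream API is hit once; every matching request within the TTL window is answered from the cache.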
The app creates a RedisCache (pointing either at a local instance or at Fly's global Redis service), reading the connection string from an environment variable. The last step is creating an Apollo Server with the provided configuration so it can start listening for incoming requests.
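The setup described above might look roughly like this sketch, assuming Apollo Server 2.x with the `apollo-server-cache-redis` package; the environment variable names and the `./schema` module are assumptions, not taken from the example app:

```javascript
const { ApolloServer } = require("apollo-server");
const { RedisCache } = require("apollo-server-cache-redis");

// Hypothetical module exporting the GraphQL type definitions and resolvers.
const { typeDefs, resolvers } = require("./schema");

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // REDIS_HOST is an assumed env variable holding the connection details,
  // e.g. for Fly's global Redis service; falls back to a local instance.
  cache: new RedisCache({
    host: process.env.REDIS_HOST || "localhost",
  }),
});

server.listen().then(({ url }) => {
  console.log(`Server ready at ${url}`);
});
```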
Fly offers a significant performance improvement over traditional hosting. Here are some sample results from a response-time test performed on this example app. As you can see, the results when hosted on Fly look much better than those of requests made directly to the Open Library API:
| Request Method | Test 1 | Test 2 | Test 3 |
|---|---|---|---|
| Open Library API | 2.06s | 1.70s | 1.24s |
The above results are possible because requests are served from the node located closest to the user querying for data, and because Redis caches the responses. After the first request to a given location, Redis keeps the data alive for approximately an hour, so subsequent matching requests sent to that location can be served significantly faster.
If you are still skeptical, visit the example app's page on GitHub. It provides detailed instructions on how to deploy the app with Fly, so you can test it yourself.