Faster response to users
It is well known that faster responses greatly improve the user experience, and many web engineers work hard to minimize the time until the user starts seeing content. Conceptually, the easiest way to optimize the speed (without reducing the size of the data being transmitted) is to transfer the essential content first, before sending other data such as images.
With the finalization of HTTP/2, such an approach has become practical thanks to its dependency-based prioritization features. However, not all web browsers (and web servers) prioritize requests optimally. The sad fact is that some of them do not prioritize requests at all, which in some cases actually leads to worse performance than HTTP/1.1.
Since the release of version 1.2.0, we have run benchmarks that measure first-paint time (the time until the web browser starts rendering the new webpage), and have added a tuning parameter that can be turned on to optimize first-paint time for web browsers that do not leverage dependency-based prioritization, while not disturbing those that implement sophisticated prioritization logic.
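The parameter is not named above; as a minimal sketch, assuming it is the http2-reprioritize-blocking-assets directive described in the H2O documentation, turning it on would look roughly like this:

# assumed directive name; reprioritizes blocking assets such as CSS and JS
# ahead of images for browsers that do not send useful priority information
http2-reprioritize-blocking-assets: ON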
The chart below shows the first-paint time measured using a virtual network with 100ms latency (typical of 4G mobile networks), rendering a web page containing jQuery, CSS, and multiple image files.
It is evident that the prioritization logic implemented in H2O and in the web browsers together offers a huge reduction in first-paint time. As the developers of H2O, we believe its prioritization logic to be best in class (if not the best of all): it not only implements the specification correctly but also includes practical tweaks that optimize for existing web browsers.
In other words, website administrators can provide a better (or the best) user experience by switching their web server to H2O. For more information on the topic, please read HTTP/2 (and H2O) improves user experience over HTTP/1.1 or SPDY.
Version 1.3.0 also supports TCP Fast Open, an extension to TCP/IP that reduces the time required to establish a new connection. The extension is already implemented in Linux (and Android), and is also expected to be included in iOS 9. As of H2O version 1.3.0, the feature is turned on by default to provide an even quicker user experience. Kudos go to Tatsuhiko Kubo for implementing the feature.
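Since the feature is on by default, no configuration is required; but as a sketch (assuming the tcp-fastopen directive from the H2O documentation, which sets the length of the TCP Fast Open queue), it can be tuned or disabled as follows:

# assumed directive; sets the queue length for TCP Fast Open (0 disables it)
tcp-fastopen: 4096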
FastCGI support
Since the initial release of H2O, many users have asked for this feature; it is finally available! And we are proud that it is easy to use.
First, it can be configured either at the path level or at the extension level. The latter means that, for example, you can simply map .php files to the FastCGI handler without writing regular expressions to extract PATH_INFO. Second, H2O can launch and supervise a FastCGI process manager itself; you do not need to spawn an external FastCGI server and maintain it separately.
Using these features, WordPress, for example, can be set up with just a few lines of configuration:
paths:
  "/":
    # serve static files if found
    file.dir: /path/to/doc-root
    # if not found, internally redirect to /index.php/...
    redirect:
      url: /index.php/
      internal: YES
      status: 307

# handle PHP scripts using php-cgi (FastCGI mode)
file.custom-handler:
  extensions: .php
  fastcgi.spawn: "PHP_FCGI_CHILDREN=10 exec /usr/bin/php-cgi"
Of course, it is also possible to configure H2O to connect to externally managed FastCGI applications over TCP/IP or UNIX sockets, as in the sketch below.
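As a minimal sketch (assuming the fastcgi.connect directive from the H2O documentation; the socket path here is made up), connecting to an externally managed FastCGI server over a UNIX socket might look like this:

paths:
  "/":
    # hypothetical socket path; point this at wherever the FastCGI server listens
    fastcgi.connect:
      port: /tmp/fcgi.sock
      type: unix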
Support for range-requests

Support for range requests (HTTP requests that ask for a portion of a file) is essential for serving audio and video files. Thanks to Justin Zhu, it is now supported by H2O.
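For illustration (the host and file name here are made up), a range request asks for a byte range of a resource and the server answers with a 206 Partial Content response carrying only that portion:

GET /video.mp4 HTTP/1.1
Host: example.com
Range: bytes=0-1023

HTTP/1.1 206 Partial Content
Content-Range: bytes 0-1023/1048576
Content-Length: 1024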
Conclusion
All in all, H2O has become a much better product in version 1.3 by improving end-user experience and by adding new features.
We plan to continue improving the product. Stay tuned!