eBay’s Font Loading Strategy

The use of custom fonts on web pages has steadily increased in recent years. As of this writing, 68% of sites in the HTTP Archive use at least one custom font. At eBay, we had been discussing custom web fonts for typography for quite some time, but never really pursued it, mainly due to uncertainty about the performance impact on end users. That changed recently.

Our design team made a strong case for a custom font to complement our new branding and, after multiple reviews, we all agreed it made sense. It was then up to the engineering team to come up with an optimized implementation that not only uses the new custom font but also tackles the performance overhead. This post gives a quick overview of the strategy we use at eBay to load custom web fonts.

Meet “Market Sans”

Our new custom font is aptly named “Market Sans” to denote its affiliation with an online marketplace. As the image below shows, it adds a subtle difference to the typography, yet taken as a whole it makes the page more elegant and creates a unique eBay-branded experience. Check out our desktop homepage, where “Market Sans” has been deployed.

Custom Font vs. System Font


It is well known and documented that custom web fonts come with a cost, and that cost is performance. They often delay rendering of text (critical to any web page) until the font is downloaded. A recent post from Akamai gives an excellent overview of the problems associated with custom fonts. To summarize, there are two major issues, and behavior varies among browsers:

  • FOUT — Flash of Unstyled Text
  • FOIT — Flash of Invisible Text

As expected, the design and product teams were not happy with this compromise. Yes, custom fonts create a unique branded experience, but not at the cost of delaying that very experience. Additionally, from an e-commerce perspective, custom fonts are a nice enhancement, not an absolute necessity; system fonts can still provide compelling typography. So it was up to the engineering team to come up with an efficient font loading strategy with minimal tradeoffs.


Our strategy was pretty simple: avoid FOUT and FOIT by using the custom font if it is already available (meaning downloaded and cached), and otherwise using the default system fonts.

Fortunately, there is a CSS Font Rendering proposal that adds a new @font-face descriptor named font-display. Using font-display, developers can specify how a font is displayed, based on whether and when it is downloaded and ready to use. font-display accepts several values (check out this quick video to understand them), and the one that maps to our strategy is ‘font-display: optional’.
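For illustration, a @font-face rule using this descriptor might look like the following sketch (the font file path and fallback stack are illustrative, not our production values):

```css
/* Sketch only: URL and fallback fonts are made up for this example. */
@font-face {
  font-family: 'Market Sans';
  src: url('/fonts/market-sans.woff2') format('woff2');
  /* 'optional' tells the browser to use the custom font only if it is
     available essentially immediately (e.g., already cached); otherwise
     it renders with the fallback font and skips the swap entirely. */
  font-display: optional;
}

body {
  font-family: 'Market Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif;
}
```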

Unfortunately, browser adoption of font-display is not yet widespread, as it is relatively new. So for now, until adoption becomes mainstream, we came up with a solution that leverages localStorage, the FontFaceSet API, and the Font Face Observer utility (as a fallback when the FontFaceSet API is not present).

The below illustration gives an overview of how it works:

Flow diagram for Font Loader

To summarize,

  • When users visit an eBay web page, we add a tiny inline CSS and JavaScript snippet in the response HTML <head> tag. We also include a small JavaScript snippet in the footer HTML that incorporates the font loader logic.
  • The JavaScript in the <head> checks localStorage to see if a font flag is set. If it is, the script immediately adds a CSS class to the document root to enable the custom font, and the page renders with the “Market Sans” custom font. This is the happy path.
  • The JavaScript in the footer again checks the localStorage for a font flag. If it is NOT set, it calls the font loader function on the document load event.
  • The font loader function loads (downloads) the custom fonts either using the built-in FontFaceSet API (if present) or through the Font Face Observer utility. The Font Face Observer is asynchronously downloaded on demand.
  • Once the font download completes, a font flag is set in localStorage. One thing to note: even though the font flag is now set, we do not update the current view with the custom font. The custom font appears on the next page visit, when the happy-path check above kicks in.
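The steps above can be condensed into a small sketch (the flag key, CSS class name, and storage/loader plumbing here are illustrative; the real logic lives in the ebay-font module):

```javascript
// Illustrative sketch of the head/footer snippets; names are made up.
const FONT_FLAG = 'ebay-font-market-sans';

// Inline <head> snippet: enable the custom font only if a previous
// visit has already downloaded it (happy path, no FOUT/FOIT).
function applyFontClass(storage, documentElement) {
  if (storage.getItem(FONT_FLAG)) {
    documentElement.className += ' font-marketsans';
    return true;
  }
  return false; // render with system fonts this time
}

// Footer snippet: after load, download the font in the background and
// set the flag so that the *next* page view uses the custom font.
function loadFontInBackground(storage, fontLoader) {
  if (storage.getItem(FONT_FLAG)) return Promise.resolve(false);
  return fontLoader() // e.g., FontFaceSet API or Font Face Observer
    .then(() => {
      storage.setItem(FONT_FLAG, 'true');
      return true;
    });
}
```

In the browser, `storage` would be `window.localStorage` and `fontLoader` would wrap `document.fonts.load(...)` when the FontFaceSet API exists, falling back to the Font Face Observer utility otherwise.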

We have open sourced this module as ebay-font. It is a small utility that works along with other eBay open source modules Skin, Marko, and Lasso, as well as in standalone mode. We hope others can benefit from it.


There are a couple of tradeoffs with this strategy:

  1. First-time users: A new user visiting eBay for the first time will get the system font. On navigation or subsequent visits, they will get our custom font. This is acceptable, as the custom font has to start somewhere, and for new users it starts on their second visit.
  2. Private or incognito mode: When a user starts a new browsing session in private or incognito mode, they get the system font initially. But subsequent browsing in the same session will render the custom font (Safari is an exception, but it is getting fixed). We do not have metrics on how many users fall under this category, but this is something we have to live with.
  3. Cache eviction: In certain rare scenarios, we observed that the custom font entry in the browser cache is evicted while the localStorage entry is still present; browsers probably clean up the cache more frequently than localStorage. In these scenarios, users will experience a FOIT or FOUT, depending on the browser. This is an edge case and hence less concerning.

As a team we agreed that these tradeoffs are acceptable, considering the unpredictable behavior that comes with default font loading.


Custom web fonts do add value to the overall user experience, but they should not come at the cost of delaying critical content. Each organization should have a font loading strategy based on its application needs. The new built-in CSS descriptor ‘font-display’ makes it very easy to choose one. We should start using it right away, even if browser support is still minimal and even if there is already an in-house implementation.

Huge thanks to my colleague Raja Ramu for partnering on this effort and helping open source the module ebay-font.

—  Senthil Padmanabhan

Dissect Helps Engineers Visualize and Debug Distributed Applications

In a natural evolution from a services architecture, we at eBay have adopted microservices to help us drive faster and more productive development cycles. eBay’s sophisticated application log infrastructure served us well for the most part, but with the move to microservices we needed to build next-level infrastructure to visualize and debug them, with these goals:

  • Make debugging more efficient by reducing the time to troubleshoot and debug an API, without requiring an understanding of all its upstream and downstream services.
  • Provide clarity and transparency for API developers to understand the caller and callee.
  • Treat instrumentation as a primary data point. Logs are great, but technically, instrumentation is a special kind of log.
  • Last but not least, the ability to query and visualize the service chain in near real time.

To help achieve these goals, we built Dissect. Dissect is an eBay distributed tracing solution, based on Google’s Dapper, that helps engineers visualize and debug distributed Java and Node.js applications. It identifies how to instrument services and which services to instrument.

How Dissect Works



  1. The Dissect instrumentation library intercepts each incoming request and instruments it.
  2. Dissect records the instrumented span data based on sampling.
  3. Dissect provides a simple waterfall view to visualize the traces.
  4. The recorded traces can be analyzed to understand release-over-release performance changes and more.


Dissect provides the following benefits:

  • Bottleneck Detection. Dissect helps us understand the depth of calls. On most occasions, no single microservice causes a bottleneck; issues can lie further up the chain or in downstream calls.
  • Developer Productivity. Dissect helps API developers produce a complete call graph and increases the room for optimization of the API.
  • Highly Scalable. Dissect supports eBay’s large volumes of transactions and scales to billions of requests for eBay’s use cases.
  • Polyglot. Dissect includes SDKs for both Java and Node.js.

Concepts & Terminologies

The root of the transaction generates a CorrelationId, and every request generates a RequestId.
This figure illustrates a sample request flow of distributed service chaining based on a user’s activity on eBay.

  • CorrelationId – A unique 64-bit identifier that identifies the transaction and propagates across the whole service life chain.
  • RequestId – A unique 64-bit identifier generated and propagated for every request.
  • Response – A standard HTTP response, with status and duration.


Dissect borrows ideas and inspiration from OpenTracing and follows its terminology:

  • A trace is a unit of work performed in a server-bound transaction; an HTTP request is a trace.
  • A span is a unit of work performed in the context of a trace; an outbound HTTP client call is a span.

Data Construct

Trace and span are defined by this data construct:

Name                      Type                 Description
Name                      String               Name of the trace transaction. (Example: HTTP request path)
ID                        UUID                 Unique ID created or generated. (Example: 64-bit UUID)
TraceId                   UUID                 The CorrelationId that propagates across the call chain; originated when the transaction started. (Example: 64-bit UUID)
ParentId                  UUID                 ID of the parent transaction.
Kind                      String               Server (or) Client.
Status                    String               Status of the trace. (Example: 200 OK)
StartTime                 Number               Start time of the trace, in milliseconds.
Duration                  Number               Time taken to complete the span transaction.
edgeAttribute             Object               Container for edge metadata.
edgeAttribute.Method      String               Type of the transaction. (Example: HTTP request method)
edgeAttribute.Caller      String               Caller (or) callee that originated the transaction. (Example: HTTP request connection address)
edgeAttribute.Attributes  Map<String, String>  Custom key/value map of string values. (Example: POOL, unique LOG ID that served the request)

Example Dataset

The following example demonstrates how the TraceId, ParentId, and Id are propagated across different systems.

Id  TraceId  ParentId  Kind
1   1        -         Server
2   1        1         Client
2   1        1         Server
3   1        2         Client
4   1        2         Client
3   1        2         Server
4   1        2         Server
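The dataset above can be reproduced with a tiny simulation of the propagation rules (sequential ids are used here for readability; real Dissect ids are 64-bit identifiers):

```javascript
// Simulate a root service calling service 2, which calls services 3 and 4.
// The client side of a hop and its downstream server side share the same id.
function trace() {
  const rows = [];
  let counter = 0;
  const traceId = ++counter; // the root id doubles as the trace id

  // Root server span has no parent.
  rows.push({ id: traceId, traceId, parentId: null, kind: 'Server' });

  function call(parentId) {
    const id = ++counter;
    rows.push({ id, traceId, parentId, kind: 'Client' }); // outbound hop
    rows.push({ id, traceId, parentId, kind: 'Server' }); // downstream side
    return id;
  }

  const svc2 = call(traceId); // id 2, parent 1
  call(svc2);                 // id 3, parent 2
  call(svc2);                 // id 4, parent 2
  return rows;
}
```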

The following figure illustrates the Dissect path, identifying the various distributed service nodes. Circle nodes are servers; square nodes are clients.


Custom Data Attributes

Along with the standard values described above, Dissect captures custom attributes for quicker debugging. These custom attributes provide enough information to quickly identify the source of a transaction. The following table lists the custom keys captured.

Name Description
POOL eBay cloud deployment follows the concept of Pool and Instances. A pool is a cluster of instances serving an application across different regions.
Unique LOG ID Log ID represents the single unit of work. This Log ID is the primary index to identify a request.

Even though these examples are tailored to HTTP protocols, the implementation is generic enough to adapt to RPCs. After all, it is a simple Java and Node.js API.


Dissect supports several sampling strategies out of the box:

  1. Sampling is implemented to collect sampled traces across applications. Downstream applications can also set higher sampling rates to collect traces based on conditions like failures, status codes, and latency in milliseconds.
  2. Dissect also allows a sliding window that dissects requests for X minutes in every X hours, letting applications dissect requests automatically. This approach yields a representative sample rather than a fixed percentage.
  3. Finally, you can use a custom sampling strategy. At eBay we have developed extensible A/B strategies, and applications using an A/B strategy can leverage it as a sampling strategy for Dissect as well.
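As a rough sketch of the first strategy (the function names here are made up; Dissect's actual samplers are internal):

```javascript
// 1. Probabilistic sampler: keep a fixed fraction of traces.
function probabilisticSampler(rate) {
  return () => Math.random() < rate;
}

// 2. Conditional sampler: downstream applications can raise the effective
//    rate for interesting traces (failures, slow responses, etc.).
function conditionalSampler(baseSampler, opts) {
  return (response) =>
    baseSampler() ||
    response.status >= 500 ||          // keep failures
    response.durationMs > opts.slowMs; // keep high-latency requests
}
```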

Note: The sampling and experimentation platforms have been part of the eBay platform for some years; applications can sample based on any of the above strategies and use Dissect to trace the requests.


Dissect provides a default set of SDKs for developers to quickly bootstrap integration with the Request Stack Trace.

Instrumentation SDKs

Framework Supported Model

The following SDKs support instrumenting your code with traces. All the SDKs listed below have both Java and Node.js implementations.

  • Interceptors/Filters
    • HTTP request interceptors (or filters) intercept all incoming HTTP requests and create a trace.
  • Client Handler
    • The client handler wraps outbound HTTP client calls and creates a span transaction in the trace context.
    • The client handler also maintains the lifecycle of the span.
  • JMX Interfaces
    • Request Stack Trace provides support for on-demand sampling using JMX interfaces.
    • Provides configuration to set up or update reporters.
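In Node.js terms, the interceptor and client-handler roles might be sketched like this (the tracer API shown is a hypothetical shape, not the real SDK surface):

```javascript
// Hypothetical sketch: one trace per inbound request.
function serverInterceptor(tracer, handler) {
  return (req, res) => {
    const trace = tracer.startTrace(req.url, req.headers);
    try {
      handler(req, res);
    } finally {
      trace.finish(res.statusCode); // close the trace with the response status
    }
  };
}

// Hypothetical sketch: one span per outbound client call, in the trace context.
function wrapClient(trace, httpCall) {
  return (url) => {
    const span = trace.startSpan(url);
    return httpCall(url)
      .then((resp) => { span.finish(resp.status); return resp; })
      .catch((err) => { span.finish(500); throw err; }); // assumption: errors recorded as 500
  };
}
```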

Collector aka Reporter SDKs

The collector, aka the Reporter SDK, is responsible for transporting the dissected requests and reporting them to the back-end infrastructure.

  • Dissect Reporters
    • A reporter is an interface that ships collected traces out of the system to central storage. The default reporter is a simple log-line reporter.
    • Dissect also ships with a Kafka messaging reporter out of the box.
    • Dissect uses Avro entities as the data format for transport.
    • Elasticsearch is used as the warm datastore to ingest data consumed from the Kafka streams.

    The implementation at eBay uses Rheos as the datastore and Pronto (scalable Elasticsearch) as the warm storage to query and aggregate results.
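The reporter contract itself is small and can be sketched as follows (names are illustrative; the default log-line reporter simply emits one line per collected trace, and a Kafka reporter would implement the same method):

```javascript
// Illustrative sketch of the Reporter contract: ship traces out of process.
class LogLineReporter {
  constructor(log = console.log) {
    this.log = log; // injectable sink, defaults to stdout
  }
  report(trace) {
    this.log(JSON.stringify(trace)); // one line per collected trace
  }
}
```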

Visualizing Traces

We use Kibana dashboards to surface all the collected traces, and a simple waterfall view to visualize a selected Trace.

Aggregating Traces

Visualizing every single Trace is important, but it is not practical to drill through every single Trace. We are evaluating Druid to provide richer and faster aggregation across the collected Traces.

OpenSource Alternatives

When we evaluated open source alternatives for instrumenting the libraries, we looked closely at the following options and the polyglot support they offer:

Alternative    Polyglot Support  Comments
Apache HTrace  No                Incubator status; supports only the JVM.
OpenTracing    Yes               Too new at the time this effort started. We are evaluating OpenTracing for the next iteration.
Spring Sleuth  No                Tightly tied to the Spring ecosystem; Spring only.
Wingtips       No                Too new, and tailored only to HTTP request filters.
Zipkin         Yes               Promising, but most of the constructs are RPC-driven. Zipkin Brave was good, but adapting it to our existing spec would have required lots of refactoring.

After evaluating the open source alternatives, we decided to go with our own implementation, one that fits the spec already defined and flowing through all our subsystems. A few reasons we made the instrumentation libraries a separate implementation:

  1. Instrumentation needed to conform to eBay’s instrumentation spec:
  • We didn’t want to adopt an existing library and try to retrofit it to the standard. Rather, we came up with a standard instrumentation library that adapts to the existing spec and data flow.
  • The instrumentation modules have eBay-internal monitoring and operability built in. These are plugins that can be injected and operationalized in the eBay infrastructure.
  2. We needed the ability to instrument based on a custom sampling strategy. eBay uses sophisticated and robust A/B sampling strategies, and we wanted to be able to supply them as custom sampling strategies.
  3. Reporters need to emit enough information about monitoring and operability in the eBay infrastructure. Dissect’s internal reporters are built with the necessary hooks and are integrated with the eBay infrastructure.
  4. We wanted to take ideas from OpenTracing on semantics and other conventions, but OpenTracing was too new for us to pilot on production-level applications, and we didn’t want external dependencies to derail our timelines.
  5. Lastly, building an instrumentation library is easy and quick as long as the standards are clearly defined.

Our goal in designing Dissect was to fill the service tracing need. eBay has sophisticated log, monitoring, and event systems; Dissect supplies the missing piece of the puzzle, service tracing.

Status of Dissect

Dissect has been widely accepted inside eBay as a need of the hour. Currently, the checkout microservices have onboarded Dissect, and the teams are extremely happy with the results. Checkout is a super-critical flow for eBay, which underscores how important the product is.


Large projects and initiatives aren’t successful without the help of many people. Thanks also to Google’s Dapper paper for the inspiration.

Introducing Regressr – An Open Source Command Line Tool to Regression Test HTTP Services


In the Artificial Intelligence-Human Language Technologies team at eBay, we work on software that powers eBay’s conversational bot, ShopBot. We ship software daily to production that makes our bot intelligent, smarter, and more human. As a crucial part of this effort, we have to make sure any regressions are caught quickly and fixed to help keep our customers doing what they love – making purchases on ShopBot.

ShopBot’s backend is built on a polyglot suite of Scala, Java, and Python-based microservices that work in unison to provide ShopBot’s functionality. Hence, many of the crucial services need to be regression tested before we can release a new version to production.

To help with that effort, we built and are open sourcing our regression testing tool, Regressr.

Why Regressr

We looked at the common approaches that are widely used in the industry today to build an automated regression testing suite. In no particular order, they are listed below.

  • Comprehensive JUnit suite that calls two versions (old and new) of the service and compares the minutiae of the responses – JSON elements, their values and the like.
  • Using SOAP UI’s Test Runner to run functional tests and catch regressions as a result of catching functionality failures.
  • No regression tests. Wait for the front-end to fail as a result of front-end regression tests in dev or test, and trace the failure to the backend.

We also looked at Diffy and were inspired by how simple it was to use for catching regressions.

We had some unique requirements for testing eBay ShopBot and found that none of these approaches provided the features we wanted:

  1. Super-low ceremony: Must quickly be able to productionize and gain significant value without too much coding or process.
  2. Low conceptual surface area: An engineer should be able to grok what the tool does and use it quickly without going through bricks of manuals and frameworks.
  3. Configurability of comparisons: We want to be able to specify how the response should be compared. Do we want to ignore JSON values? Do we want to ignore certain elements? What about comparing floating point numbers, precision, etc.?
  4. Compare at high levels of abstraction: We want to capture high-level metrics of the responses and then perform regression testing on them. For example, we would like to be able to say the number of search results in this response were 5 and then use that number to compare against future deployments.
  5. Low maintenance overhead: We want maintenance of the regression suite to have low or negligible coding effort. Once every deployment is approved for release, we just want the suite to automatically capture the current state of the deployment and use that as a reference for future deployments.
  6. CI/CD Integration: Finally, we wanted this suite to be hooked into our CI/CD build.

We built Regressr specifically to solve these requirements, so that the team can focus on the important stuff, which is serving great experiences and value to our customers who use ShopBot.

Regressr is a Scala-based command line tool that tests HTTP services, plain and simple. We built it to be really good at what it does. With Regressr, you can use the out-of-the-box components to get a basic regression test for your service up and running quickly and gain instantaneous value, while coding regression tests that will cover close to 100% of the functionality in a more delayed fashion as time permits. Finally, Regressr doesn’t even need the two services to be up and running at the same time, as it uses a datastore to capture the detail of the baseline.

Regressr works in two modes:

  1. Record – Use Record when you want to capture the current state of a deployment to be compared as the baseline for later deployments. A strategy file is specified that contains the specifics of what needs to be recorded.
  2. Compare/Replay – Compares the current state of a deployment with a baseline and generates a comparison report.
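In miniature, the two modes amount to capturing a baseline and diffing against it (this sketch is ours, not Regressr's internals; the real tool persists baselines in a datastore and delegates comparison to Comparator components):

```javascript
// Record mode: capture the current deployment's recorded entries as the baseline.
function record(store, name, entries) {
  store[name] = entries;
}

// Compare mode: diff a later deployment's entries against the baseline
// and return human-readable comparison messages (empty = no regressions).
function compareWith(store, name, entries) {
  const baseline = store[name] || {};
  const messages = [];
  for (const key of Object.keys(baseline)) {
    if (JSON.stringify(baseline[key]) !== JSON.stringify(entries[key])) {
      messages.push(
        `${key}: expected ${JSON.stringify(baseline[key])}, got ${JSON.stringify(entries[key])}`
      );
    }
  }
  return messages;
}
```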

The image below captures what is done in these two flows.

The Strategy File

The strategy file is the configuration that drives what happens during a record and a compareWith execution.

An example strategy file that performs regression testing over three requests is shown below:

  baseURL       : http://localhost:9882/endpoint

  Content-Type    : application/json


  - requestName: say_hello
    path: /say_hello
    method: GET
    recorder: org.ebayopensource.regression.internal.components.recorder.SimpleHTTPJSONRecorder
    comparator: org.ebayopensource.regression.internal.components.comparator.SimpleHTTPJsonComparator

  - requestName: shop for a pair of shoes
    path: /shopping
    method: POST
    requestBuilder: org.ebayopensource.regression.example.ExampleRequestBuilder
      conversationId     : 12345
      keyword            : Show me a pair of shoes
      mission_start      : yes
    recorder: org.ebayopensource.regression.internal.components.recorder.SimpleHTTPJSONRecorder
    comparator: org.ebayopensource.regression.internal.components.comparator.SimpleHTTPJsonComparator

  - requestName: say goodbye
    path: /goodbye
    method: POST
    requestBuilder: org.ebayopensource.regression.internal.components.requestBuilder.StringHTTPBuilder
      payload            : '{"mission" : "12345", "keyword" : "good bye", "mission_start" : "no" }'
    recorder: org.ebayopensource.regression.internal.components.recorder.SimpleHTTPJSONRecorder
    comparator: org.ebayopensource.regression.internal.components.comparator.SimpleHTTPJsonComparator

The Components

The important parts of the strategy file are the different components, RequestBuilder, Recorder, and Comparator.

RequestBuilder is used to specify how the request should be built in case of a POST or a PUT request.

The interface for RequestBuilder accepts a Map of Strings and outputs the payload that will be sent in the request.

abstract class RequestPayloadBuilder {

  def buildRequest(dataInput: Map[String, String]): Try[String]
}

Recorder is used to specify what parts of the response should be recorded for future comparison. Regressr injects all parts of the response into the Recorder at this time.

The interface for Recorder accepts a list of HTTPResponses (most of the time this will be one) and returns a RequestRecordingEntry.

The RequestRecordingEntry is a holder for a value that will be recorded in Regressr’s datastore. A response code can be stored in a RequestRecordingEntry; so can a JSON response. You can also perform some computation on the JSON and store a number (like the number of search results).

The interface for Recorder looks like this:

protected def record(responses: Seq[HTTPResponse]) : Try[RequestRecordingEntry]

Finally, the Comparator is used to specify the details of comparison during the compareWith mode. How do you want to compare JSON documents? What about strings?

The interface for Comparator looks like this. It accepts both the recorded RequestRecordingEntry and the current one, and returns a list of CompareMessages that will be included in the comparison report.

abstract class Comparator {

  def compare(recorded: RequestRecordingEntry, replayed: RequestRecordingEntry): Try[Seq[CompareMessage]]
}

Regressr comes with out-of-the-box components that can be plugged in to provide significant value instantly for many common types of services. However, you can also write your own components implementing these interfaces and include them in Regressr (use ./regressr.sh -r to build everything).

The comparison report is generated at the end of the compareWith lifecycle and looks like this:

Testing HATEOAS services

HATEOAS (Hypermedia As The Engine Of Application State) is where some classes of RESTful services tend to end up, especially when there are lightweight GUIs in front of them that mimic the conversation with the service. Regressr also supports simple and efficient breadth-first traversal of HATEOAS resources for regression testing.

We support this through a new component class called Continuations.

Let’s imagine you have a shopping cart service exposed at a URL such as /shoppingCart/items.

When issued a GET request on this URL, a service modeled on HATEOAS principles will return something similar to:

    "items": [
        "item1": "http://<host>/shoppingCart/items/<item-id>/",
        "item2": "http://<host>/shoppingCart/items/<item-id>/",
        "item3": "http://<host>/shoppingCart/items/<item-id>/"

As you can imagine, these new requests are non-deterministic and cannot be modeled in Regressr’s configuration files, because the data may change over time.

That is where Continuations come in. With continuations, the tester can specify how many new requests should be created programmatically based on the response of a previous service call.

This allows the tester to write a generic continuation that creates new requests based on how many items were present in the response of the /items call.
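A continuation can be pictured as a function from a prior response to a list of new requests (this sketch is illustrative, not Regressr's actual Scala interface; the response shape follows the /shoppingCart/items example above):

```javascript
// Illustrative continuation: derive one GET request per item URL found
// in a previous response.
function itemContinuation(response) {
  return Object.values(response.items).map((url) => ({ method: 'GET', path: url }));
}
```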

An example of continuations is here.

What’s Next

  1. A Maven plugin that attaches to Regressr and can be used in a CI/CD build.
  2. A Jenkins plugin for the Regressr report.
  3. Global comparators that can be used to capture global metrics across requests and compare them.

Conclusion and Credits

We have found Regressr to be a very useful regression testing tool for lean and low ceremony engineering teams that wish to minimize effort when it comes to regression testing of their services.

There were many people involved in the design, build, and testing of Regressr, without whom this could not have been possible. Recognizing them: Ashwanth Fernando, Alex Zhang, Robert Enyedi, Ajinkya Kale, and our director Amit Srivastava.

Comments and PRs are welcomed with open arms at https://github.com/eBay/regressr.