The Future of Marko

At eBay, we’ve completely transformed how we build web applications, starting with the transition from a Java-based stack to a Node.js-based stack. Node.js has enabled teams to move faster, and it offers an abundant ecosystem of tools and libraries that are essential to modern web application development.

We built Marko, a library for building UI components with minimal boilerplate, five years ago, and it has evolved based on feedback from our vibrant and growing community. Marko is completely open source, and to ensure that it remains a healthy open source project, we are thrilled to announce that eBay will be contributing Marko to the JS Foundation.

Marko will continue to be a key component of eBay’s web application development stack. It takes care of automatically and efficiently updating the DOM in response to data changes. On the server, Marko takes advantage of the asynchronous and streaming primitives provided by Node.js to greatly accelerate the performance of eBay’s pages, ensuring shoppers are getting the fastest experience available when browsing on eBay.

Joining the JS Foundation

At eBay, we were founded with the core belief that we should use technology to empower and connect people globally. In the technology world, we’re a core contributor to and believer in open source technology. Not only does a company culture of open source help us empower our developers, but it also enables our technologists to collaborate across the organization and with peers across the industry. eBay is a member of the Linux Foundation (including the Cloud Native Computing Foundation and the Open API Initiative) and will continue to actively participate in the open source software community.

So what does it mean for Marko to join the JS Foundation? First off, with nearly 20,000 UI components within eBay, we are committed to evolving Marko and expanding the surrounding ecosystem. The Marko core team members that are employed by eBay will continue to maintain and lead the project.

As part of the JS Foundation, Marko will reside alongside other notable projects such as webpack and Mocha. By moving Marko to the JS Foundation, we feel that we will be able to more closely align with other projects in the JavaScript ecosystem. In addition, we want to make it clear that Marko has and always will be open to outside contributions and outside maintainers. While we have seen great growth in the Marko community, we believe there is still a lot of potential yet to be unlocked. Through neutral governance and close ties with other prominent projects, we believe the JS Foundation will allow the Marko community to grow and flourish.

Early history

Marko has a long history within eBay that dates back to 2012, when we started exploring using Node.js as our web application development stack. This was at a time when JavaScript HTML templating was starting to take off. At eBay, server-side rendering was very important, and we wanted support for UI components that provided encapsulation of rendering logic, client-side behavior and styling, and progressive and asynchronous HTML rendering (features that we had on our previous Java-based stack). Dust.js was used by a few teams because it offered streaming and asynchronous rendering, but it lacked support for UI components. Dust.js also provided very few hooks to optimize templates at compile-time, and it promoted what we considered the bad practice of global helpers. eBay open sourced a JavaScript toolkit named RaptorJS that included a very early version of Marko called Raptor Templates. RaptorJS is now defunct, but many of the modules that were part of RaptorJS now live on as independent projects (including Marko).

Marko has evolved a lot over the years. While Marko has always had very strong performance on the server and support for basic UI components, many other features came later and were inspired by other UI libraries/frameworks. For example, after React was announced and gained popularity due to virtual DOM (VDOM) rendering and diffing, we also introduced VDOM rendering and DOM diffing/patching into Marko to avoid manual DOM manipulation. However, unlike with React, the Marko VDOM was and will continue to be an implementation detail that could change at any time. Support for single file UI components was inspired by a similar feature found in Vue and Riot.js. Marko has always aimed to stay competitive with other UI libraries by innovating and closely following industry trends while also focusing on keeping the runtime fast and small.

Marko is now heavily used within eBay, and it is also starting to be used by outside companies, startups, government agencies and educational institutions. The Marko ecosystem has continued to grow and is now supported in many different IDEs and editors and on services like GitHub. The core Marko team has continued to grow, and it consists of a mix of eBay employees and outside developers.

Project roadmap

Asset pipeline integration

Delivering JS, CSS, images, and other front-end assets to the browser is a fundamental requirement of building any web application. As such, we believe Marko should offer first-level support for an “asset pipeline” to simplify the build chain that most developers are used to.

At eBay, we do not have a separate build step. Instead, at runtime we generate all of the JavaScript and CSS bundles required to make the page function. In addition, our tools automatically inject the required <script> tags into the page body and the required <link> tags into the page head. Furthermore, front-end assets such as images and fonts automatically get uploaded to the eBay Resource Server that backs our Content Distribution Network (CDN). We want to introduce this ease of use to all users of Marko.

Progressive Web App (PWA) samples

Progressive Web Apps offer a compelling user experience that is reliable, fast, and engaging. We want to help more developers build PWAs and will be rolling out more sample PWAs built on Marko to help guide developers.

Language Server support

Integration with editors and IDEs is a challenge for any new language or framework. We have implemented advanced support for the Atom editor, including autocomplete of both core and user-defined tags, hyperclick to jump to tag definitions, and more. But for other editors, we only provide basic syntax highlighting.

Microsoft’s Language Server Protocol gives us the opportunity to write this advanced functionality in a way that can be shared across a growing number of editors.

Improved error messages in development

Compiler checks have improved the developer experience by catching things like misspelled tag names or the use of deprecated features. And while these checks are fairly comprehensive, certain checks can only be done at runtime.

In the past, we have kept runtime code size small and fast by limiting error messages and runtime error checking. We recently updated Marko to support both a development mode and a production mode, and now we want to take the logical next step to add additional code to our runtime that will provide much friendlier error messages and catch problems earlier.

UI component marketplace

Starting a new web application can be daunting, but having an arsenal of UI components to choose from can be a huge time saver. While it can be challenging to create a UI component that works well for every application, we believe it is extremely helpful for developers to showcase their UI components, even if they are to be forked and adapted for slightly different use cases.

With nearly 20k components at eBay, we want to make it easy for our own developers to find the right component for the job, and we’d like to extend this marketplace to the open source community to make it easy to find quality components for use in your app.

We are excited about the future of Marko and look forward to building it with the support of the JS Foundation. If you are interested in learning more about Marko, you can get additional information on the Marko website. Join the conversation and contribute on GitHub.

– the Marko team at eBay


Patrick Steele-Idem

Patrick Steele-Idem is a Principal Engineer on the eBay Platform team and is co-leading eBay’s open source program. He is actively engaged in many open source projects (including Marko, Lasso, and morphdom). Patrick is the original author of Marko and is now leading the Marko core team and the Lasso core team.

 


Michael Rawlings

Michael Rawlings is a Senior Software Engineer on the eBay Platform team where he works closely with product teams to improve the way front-end applications are built. He enjoys building tools that improve the developer experience and make it simpler to build scalable and performant apps.  

 


Austin Kelleher

Austin is a Software Engineer on the eBay Platform team. He graduated from Penn State University in 2016 with a degree in Computer Science. Previous to joining eBay, Austin contributed to Marko and Lasso in his free time.

Tiered Test Automation

As application code has evolved from monolithic to client-server, and now to microservices, test automation has to evolve as well. Test automation that relies heavily on the user interface and web service endpoint layers tends to have low pass rates: these layers are flaky because of data dependencies and inconsistencies, environment instability, slow execution, and expensive maintenance of the test code.

In a Tiered-Level Test Automation approach, the test code follows the test pyramid popularized by Martin Fowler, in which only minimal coverage is given to user interface and user action tests. The further upstream the test code sits, the more scenarios and test permutations are covered at those lower tiers of the automation.

Black box testing

Black box testing exercises the application from the user's point of view. In manual testing, the tester interacts with the page and verifies that the functionality is working as expected by visually checking the UI components and performing actions on them.

Customer-Facing Public Website

White box testing

White box testing looks at the application code under test, for example, methods such as these:

public List fetchOverridesFromConfig(IAppConfigContext context) {...}
public void applyOverrides(PageDefinition template, List moduleOverrides, ExpClassification expClassification) {...}

Unit testing

Unit testing looks at the smallest testable parts of an application. Test automation should be heavy on unit tests. We implement unit testing by creating tests for individual methods. The technology stack used in our project comprises Java, JUnit, Mockito, JSON, and ObjectMapper.

Implementation highlights

Our unit tests mock the dependencies and pass them to the application method in question to run it through the business logic. They then assert that the actual response matches the expected response stored in a JSON file. The following sample code is a unit test.

@Test
public void testFetchTemplateFromRepo()
    throws JsonGenerationException, JsonMappingException, IOException {
  CustomizationSample s = getSample(name.getMethodName());
  // Mock the configuration lookup so the test does not depend on a live config system.
  LookUpConfiguration.RCS_TEMPLATE_CONFIG = Mockito.mock(LookUpConfiguration.class);
  String templateString = mapper.writeValueAsString(s.getDefinition());
  Mockito.when(LookUpConfiguration.RCS_TEMPLATE_CONFIG.getValue(
               Mockito.any(RCSDictionaryKey.class),
               Mockito.any(IAppConfigContext.class)))
         .thenReturn(templateString);
  PageDefinition definition = helper.fetchPageTemplateFromConfig(
     mockDataFactory.manufacturePojo(PageTemplateRepoConfigContext.class));
  // Compare against the expected response stored in the sample JSON file.
  Assert.assertEquals(s.getOutput().toString(), definition.toString());
}

@Test
public void testPageDefinition() {
  CustomizationSample s = getSample(name.getMethodName());
  helper.applyOverrides(s.getDefinition(), s.getOverrides(), ExpClassification.ALL);
  Assert.assertEquals(s.getOutput().toString(), s.getDefinition().toString());
}

Integration mock tests

The rationale for white box testing is to uncover more issues upstream. Integration tests can be written to verify the contracts of web services independently of the whole system.

For example, if the configuration system is down or its data is wiped out, does that mean the application code cannot be signed off because the tests are failing? One approach is to mock the values from the configuration system, use them as the dependencies to run through the integrated business logic, and assert that the actual response matches the expected one. The technologies used in the mock tests include JUnit parameterized tests, Mockito, PowerMockito, JSON, and Gson.
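The mocking strategy described above can be sketched in plain Java. This is an illustrative sketch only: `ConfigSource`, `PageTitleResolver`, and the stubbed values are hypothetical names, and a hand-rolled stub stands in for the Mockito/PowerMockito mocks that the real tests use.

```java
// Hypothetical stand-in for the configuration system's lookup interface.
interface ConfigSource {
    String getValue(String key);
}

// Hypothetical business logic under test: resolve a page title from config,
// falling back to a default when the key is absent.
class PageTitleResolver {
    private final ConfigSource config;

    PageTitleResolver(ConfigSource config) {
        this.config = config;
    }

    String resolveTitle(String key) {
        String value = config.getValue(key);
        return value != null ? value : "Default Title";
    }
}

public class ConfigMockSketch {
    // Stub that stands in for the (possibly unavailable) configuration system.
    static ConfigSource stubConfig() {
        return key -> "page.title".equals(key) ? "Homepage" : null;
    }

    public static void main(String[] args) {
        PageTitleResolver resolver = new PageTitleResolver(stubConfig());
        // Assert that the business logic produces the expected responses
        // even though no real configuration system is involved.
        if (!"Homepage".equals(resolver.resolveTitle("page.title"))) {
            throw new AssertionError("expected stubbed title");
        }
        if (!"Default Title".equals(resolver.resolveTitle("missing.key"))) {
            throw new AssertionError("expected fallback title");
        }
        System.out.println("mock-based checks passed");
    }
}
```

Because the stub supplies the dependency, the test passes or fails purely on the business logic, regardless of the state of the real configuration system.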

Implementation highlights

How are the unit tests different from the integration tests? The unit tests exercise individual methods, while the integration tests call those methods together as a flow.

@Test
public void testResponse() throws IOException {
  PageDefinition pageDefinition =
     getPageDefinition(rcsConfigurationContext, true, true);
  new PageDefinitionValidator.Validator(pageDefinition)
     .validateNull()
     .pageDefTemplate()
     .modules();
}

private PageDefinition getPageDefinition(RCSConfigurationContext context,
    boolean customized, boolean includeToolMeta) {
  ...
  template = PageDefinitionHelper.getInstance()
     .fetchPageTemplateFromConfig(context);
  PageDefinitionHelper.getInstance().populateModuleInput(
     template, context, includeToolMeta);
  if (customized) {
     List overrides = PageDefinitionHelper.getInstance()
        .fetchOverridesFromConfig(context);
     PageDefinitionHelper.getInstance().applyOverrides(
        template, overrides, expClassification);
  }
  return template;
}

Services endpoint tests

The services layer tests run against the RESTful endpoints with only a few positive scenarios and invalid requests. The responses are then validated against the expected data values, error codes, and messages. The technologies used are Java, Gson, and the Jersey API.

Implementation highlights

Use JSON to store the request and response, and parse and pass them as a data provider into the test methods.

{
 "getCustomizationById": [{
     "marketPlaceId": "0",
     "author": "SampleTestUser2",
     "responseCode": "200"
   }
 ],
 "negative_getCustomizationById": [{
     "marketPlaceId": "0",
     "customizationId": "",
     "domain": "home_page",
     "responseCode": "500",
     "errorMessage": "Invalid experience usecase."
   },...

public void testPageDefinitionService(String url, Site site,
    String propFile, JSONObject jsonObject) throws Exception {
  // Build the context builder from JSON
  ...
  // Create the customization
  Response createCustomizationResponse = CustomizationServiceClientResponse
     .createCustomization(url, propFile,
        jsonObject.getString("author"),
        jsonObject.getString("experienceUseCase"),
        xpdContextBuilder, xpdModuleOverridesList);
  // Get the customization by context
  String customizationId = CustomizationServiceClientResponse
     .getCustomizationIdByContext(url, propFile,
        jsonObject.getString("experienceUseCase"), xpdContextBuilder);
  // Validate that the module override is applied on the pageDefinition
  new PageDefServiceResponseDataValidator
     .Validator(response, xpdModuleOverridesList)
     .moduleData()
     .moduleLocators();
}

UI Tests

User Interface test automation is minimal and focuses on actions on the page, for example, editing and saving the page after making the customization changes. The technologies used are Java, Selenium WebDriver, HTTPClient, JSON, and Gson.

Implementation highlights

The UI tests iterate through a collected list of web elements and actions, rather than taking each locator individually for each row and column. The functional flows are also verified through integration with the services: for example, the edit and save flows are tested by fetching the service response, treating it as the source of truth, and verifying that the data saved through the UI matches it. This strategy validates more data on the fly instead of hard-coding the expected output in the test class or in a properties file.
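The iterate-and-compare strategy can be sketched as follows. This is a minimal sketch under stated assumptions: `UiServiceComparisonSketch`, `diff`, and the maps are hypothetical stand-ins for the Selenium-scraped UI values and the parsed service response.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the data-driven comparison: instead of hard-coding
// expected values, treat the service response as the source of truth and
// iterate over every field, comparing it to what the UI displays.
public class UiServiceComparisonSketch {

    // Returns every field where the UI value disagrees with the service value.
    static Map<String, String> diff(Map<String, String> uiValues,
                                    Map<String, String> serviceValues) {
        Map<String, String> mismatches = new LinkedHashMap<>();
        for (Map.Entry<String, String> entry : serviceValues.entrySet()) {
            String uiValue = uiValues.get(entry.getKey());
            if (!entry.getValue().equals(uiValue)) {
                mismatches.put(entry.getKey(),
                        "service=" + entry.getValue() + ", ui=" + uiValue);
            }
        }
        return mismatches;
    }

    public static void main(String[] args) {
        // Hypothetical service response (source of truth).
        Map<String, String> service = new LinkedHashMap<>();
        service.put("title", "Deals of the Day");
        service.put("moduleCount", "4");

        // Hypothetical values scraped from the UI after a save flow.
        Map<String, String> ui = new LinkedHashMap<>();
        ui.put("title", "Deals of the Day");
        ui.put("moduleCount", "4");

        Map<String, String> mismatches = diff(ui, service);
        if (!mismatches.isEmpty()) {
            throw new AssertionError("UI does not match service: " + mismatches);
        }
        System.out.println("UI matches service response");
    }
}
```

Adding a new field to the page then requires no test-code change beyond the service contract itself, since every field in the response is compared automatically.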

A snippet of the rows and columns of the Internal Tool’s UI that is the subject of the test


public Validator save() {
   SaveServiceAPI saveService = new SaveServiceAPI();
   saveCustomization.clickCancelEdits();
   saveCustomization.clickRestoreDefaults();
   saveCustomization.editModuleBeforeSaving(content(propFile, "SAVE_ON_TITLE"));
   saveCustomization.clickSave();
   // Compare the title saved through the UI against the service response.
   assertChain.string().equals(content(propFile, "SAVE_ON_TITLE"),
      saveService.getModuleTitle(content(propFile, "SAVE_REQUEST_URL")),
      "Saved Title in the UI doesn't match with the Title in Service response");
   return this;
}

Conclusion

Discovering issues upstream is more efficient and less expensive than finding them once the product is already developed and in production. The Tiered-Level Test Automation approach encourages developers to think about where and when it is best to test the product, and sets an example for doing so.

Implementing the tiered test automation was indeed a collaborative effort. Thanks to my colleagues, Kalyana Gundamaraju, Srilatha Pedarla, Krishna Abothu, and Manoj Chandramohan for their contributions to this test automation design.

Ann Del Rio