Saturday, December 20, 2014

Inflicting Bad Things on Protractor for Great Polymer Good


I never seriously considered Protractor as a possible Polymer testing solution. For the last few days, I have been messing around with it in an effort to understand Protractor and what I might want in a real end-to-end testing framework for Polymer. But at no point did I think that Protractor might be anything close to a real solution. Until last night.

Using Page Objects, I wrote the following test:
describe('<x-pizza>', function(){
  beforeEach(function(){
    // (1) Get the page containing my Polymer element (served
    //     by Python simple HTTP server)
    browser.get('http://localhost:8000');
    expect($('[unresolved]').waitAbsent()).toBeTruthy();
  });
  it('updates value when internal state changes', function() {
    // (2) Tell the element on the page to add pepperoni
    new XPizzaComponent().
      addFirstHalfTopping('pepperoni');

    // (3) Verify pepperoni was added
    expect($('x-pizza').getAttribute('value')).
      toMatch('pepperoni');
  });
});
From a readability (and hence maintainability) standpoint, it does not get much better than that. Load the page, add something to the component, assert that the component changed as expected.

To be sure, this hides a fair bit of complexity, including some ugly hackery that works around Protractor/WebDriver's lack of shadow DOM support. That lack of support was the main reason that I never thought Protractor could be a legitimate functional Polymer testing tool—the shadow DOM being a fundamental piece of Polymer elements and all. And yet, that test code is beautiful.
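I will save the gory details of that hackery for another day, but the general shape is easy to sketch. Something along these lines (the helper name and the inner selector are illustrative stand-ins, not the actual page object code) can reach through the shadow boundary by running script inside the page, where WebDriver's own selectors cannot follow:
// Illustrative only: reach into the shadow DOM from inside the page,
// since WebDriver selectors stop at the light DOM. The '.pizza-state'
// selector is a made-up stand-in for whatever lives in the template.
function shadowText(hostSelector, innerSelector) {
  return browser.executeScript(function(host, inner) {
    var el = document.querySelector(host);
    var root = el.shadowRoot || el;
    var target = root.querySelector(inner);
    return target && target.textContent;
  }, hostSelector, innerSelector);
}

// Usage: the promise that comes back plays nicely with expect():
// expect(shadowText('x-pizza', '.pizza-state')).toMatch('pepperoni');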

The page object test code even compares favorably to the Karma test code that I wrote for an earlier version of the same <x-pizza> element:
describe('<x-pizza>', function(){
  var container, xPizza;

  beforeEach(function(done){
    container = document.createElement("div");
    var el = document.createElement("x-pizza");
    container.appendChild(el);
    document.body.appendChild(container);

    xPizza = new XPizzaComponent(el);
    xPizza.flush(done);
  });

  describe('adding a whole topping', function(){
    beforeEach(function(done){
      xPizza.addWholeTopping('green peppers', done);
    });

    it('updates the pizza state accordingly', function(){
      expect(xPizza.currentPizzaStateDisplay()).
        toMatch('green peppers');
    });
  });
});
I have been perfectly content with this Karma solution. There is a little work to add the element to the page, but that is not a big deal. What is something of a big deal is the need to pass those done callbacks into the xPizza page object. Last night's <x-pizza> page object for Protractor was a direct copy of the page object for Karma—except that I removed the callback code.
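I will not reproduce the real page object here, but the rough shape of the Protractor version looks something like the following (the selector and the in-page script body are illustrative, not the actual implementation). The important part is what is missing: there are no done callbacks anywhere; every method simply queues work on Protractor's control flow and returns a promise.
// Rough shape only; the selector and the in-page script body are
// illustrative, not the real implementation.
function XPizzaComponent() {}

XPizzaComponent.prototype.addFirstHalfTopping = function(topping) {
  // The work happens inside the page because the element's internals
  // live in the shadow DOM, out of reach of WebDriver selectors.
  return browser.executeScript(function(t) {
    var el = document.querySelector('x-pizza');
    var root = el.shadowRoot || el;
    var select = root.querySelector('select.firstHalfToppings');
    select.value = t;
    // Nudge the element so it notices the new selection
    select.dispatchEvent(new Event('change'));
  }, topping);
};

module.exports = XPizzaComponent;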

Protractor's built-in promises eliminate the worry about Polymer's asynchronous nature—and it is a complex worry. I am almost tempted to rewrite the page objects chapter in Patterns in Polymer for Protractor. Almost.

The problem is a tradeoff of conceptual complexity. If I stick with Karma, all of the testing framework complexity is described in the unit test chapter—and I will have already discussed dealing with the async nature of Polymer in that chapter. In other words, a Karma page objects chapter is just about page objects. If I opt for Protractor, I have to (1) introduce Protractor, (2) detail how its asynchronous support helps with Polymer, (3) explain that, because of Protractor's lack of shadow DOM support, readers can never use built-in WebElements, and (4) describe how to work around that lack of shadow DOM support. Oh yeah, and I have to explain what Page Objects are.

I still might do just that, but I need more experience with this before I make that call. I think I would like to start by improving the setup code:
  beforeEach(function(){
    browser.get('http://localhost:8000');
    expect($('[unresolved]').waitAbsent()).toBeTruthy();
  });
The expect() is a bit awkward. Even experienced Polymer… uh ers? Sure, why not. Even experienced Polymerers would find that hard to read—even though they know that Polymer removes the unresolved attribute once it has initialized, to prevent a flash of unstyled content (FOUC).
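For the record, waitAbsent() is not something that Protractor supplies out of the box; it is a small helper. A stand-alone sketch of such a helper, using nothing fancier than browser.wait() and isPresent(), might look like this (the five second timeout is an arbitrary choice):
// A sketch of a waitAbsent-style helper (not necessarily the version
// used above): poll until the matched element is gone from the page.
function waitAbsent(finder, timeout) {
  return browser.wait(function() {
    return finder.isPresent().then(function(present) {
      return !present;
    });
  }, timeout || 5000);
}

// Usage: expect(waitAbsent($('[unresolved]'))).toBeTruthy();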

I would much rather just write:
  beforeEach(function(){
    browser.get('http://localhost:8000');
  });
And I can do just that by mucking with the browser object in the Protractor configuration's onPrepare block:
exports.config = {
  // ...
  onPrepare: function() {
    browser.ignoreSynchronization = true;

    browser._originalGet = browser.get;

    // Wait for Polymer to remove [unresolved] after every page load
    browser.get = function(url) {
      browser._originalGet(url);
      expect($('[unresolved]').waitAbsent()).toBeTruthy();
    };
    // ...
  }
}
Is that horrible? I am rewriting the very important browser.get() to do something very different. I think I am OK with that. The first onPrepare line has already set this test run on a very different, non-AngularJS path than it would otherwise be on. If I am in for a penny, I might as well embrace the pound.

Especially if the pound gives me tests like:
describe('<x-pizza>', function(){
  beforeEach(function(){
    browser.get('http://localhost:8000');
  });
  it('updates value when internal state changes', function() {
    new XPizzaComponent().
      addFirstHalfTopping('pepperoni');

    expect($('x-pizza').getAttribute('value')).
      toMatch('pepperoni');
  });
});
That is one pretty pound.


Day #30
