Wednesday, July 26, 2017

Making a conference talk practical (for me)

I've again had the annual pleasure of talking with *amazing* people from around the world, both seasoned speakers and new ones, and getting inspired by their stories. It pains me that only a small percentage of all these awesome people can be selected, and that our specific selection criterion of practical relevance makes it even harder for many of them. Simultaneously, I'm delighted to realize that while I may say no on behalf of European Testing Conference 2018, I can help those people make their proposals stronger for other conferences.

Today, however, I wanted to write down my thoughts on what is a talk that is practical, to me.

I've had the pleasure of listening to lots of presenters on lots of topics, and over time, I've started recognizing patterns. There's one typical talk type, usually around themes such as security testing, performance testing, test automation and shifting work left, that I've categorized as the talk about the importance of a thing. This is one where the core message is selling an idea: "bringing testers into the whole lifecycle in agile is important". "Test automation is hard and important". "Performance testing continuously is important".

I get this. Important. But I know this. My question is: if it is important, what do I do? So here are the stories I'd rather hear, ones that make the topic practical.

1) I sort of knew X was important, but we did not do it. We failed this way. And after we failed, we learned. This is specifically what we learned and how we approached solving the problem. So, learn from my mistakes and make your own.

2) I'm an expert in X, and you may be an expert or a novice. But if you try doing X, you could try doing this specific thing in X in this way, because I find that it has been helpful. This answers your question of how, after quickly introducing what, and enables you to leave the conference knowing what you can do, not just that you will need to do something.

3) Here's a concept I run into. Here's how I applied it in my project, and here's what changed. Here's some other concepts we're thinking of trying out next.

Assume that "important" or "necessary" is a prerequisite. What would you say in your talk then?

Tuesday, July 25, 2017

Greedy Speakers are the Death of conferences

Conference organizing is hard work. Lots of hours. Stress over risks.

But it's also great and amazing. Bringing people together to learn and network makes me feel like I'm making a small difference in the world.

And for me in 2017 it has also been losing some tens of thousands of euros on organizing a conference that was totally worth the investment, regardless. 

I organize conferences idealistically. My ideology is two-fold: 
  1. I want to change the world of conferences so that money isn't blocking the voices from getting to stage. 
  2. I want to raise money to do more good by supporting speakers for conferences that don't pay the speakers.
I also organize without raising money, and I've made organizing without any money a form of art for myself over the last 15 years. But that's local meetups, and I do a lot of them. I have four coming up in the next month.

I'm tired of conferences where the majority of speakers are vendors, because vendors have an interest in paying for the speaking slot. I want to hear from practitioners, and sometimes consultants, if they keep the selling to a minimum. The bottom line is that all speakers have something to sell anyway, their personal brand if nothing else.

I would like to believe that conference going is not a zero-sum game, where choosing one takes away from the other. People need places where they can share, and there are a lot of people wanting to listen to various perspectives. But I also feel that people need to make choices about which conference they go to, with their limited budget. Cheap conferences are great; they enable your organization to send more people out. But conferences are cheap only if the money comes from elsewhere. And this elsewhere is sponsors, and speakers as sponsors, paying their own way to work for the conference.

Being able to afford the cost is a privilege not everyone has. I would like to see that change, and thus I support the idea of Not Paying to Speak at Conferences. And this means travel + hotel paid. No fancy expense accounts, not even payment for the hours of work put into the talk you're delivering, but taking away the direct cost.

Conferences that don't pay but yet seek non-local voices have made a choice of asking their speakers to sponsor them and/or the audience (if truly low-cost). If they're explicit about it, fine.

They could choose to seek local voices so that travel and expenses are not relevant. But they want to serve the local community with the voices of people who travel, and people (who can afford the travel in the first place) have the freedom to make that choice. The local community never has a chance of hearing from someone who won't travel. They haven't heard that voice before, and still won't. And the ones who can't afford it (I was one!) can be proud and choose to remain local, rather than go begging for special treatment. Some people don't mind asking.

I wrote all of this to comment on a tweet:
I've been told that travel expenses for the speakers, and in particular paying the speakers, are the death of commercial conferences too. They need to pay the organizers' salaries. It's a choice of ticket pricing and who gets paid first. Local conferences don't die from travel expenses if they work with local speakers. But they tend to like to reach out to "names" that could bring news from elsewhere to the local community.

The assumption is that a higher ticket price is the death of a conference. It's based on the idea that people don't value (with money) the training they're receiving. Perhaps that is where the change needs to be: the expectation of free meals.

I can wholeheartedly support this: 
Do that even if you're not a first time speaker. There's nothing wrong with building your local community through sharing. It might give you more than the international arenas.

Greedy speakers are not the death of conferences. There are conferences with hugely expensive professional speakers that cost loads, and still fill up. If anything is the death of conferences, it's the idea that people are so used to getting conferences for free that they won't pay what it really costs to organize a *training*-oriented conference.

Luckily we have open spaces, where everyone is equal and everyone pays. We're all speakers, all participants. Conferring can happen without allocated speakers, as people meet.

Saturday, July 22, 2017

A Team Member with Testing Emphasis

Browsing Twitter, I came across a thought-provoking tweet:
Liz Keogh is amazing in more ways than I can start explaining, and in my book she is a programmer who is decent (good, even) at testing. And she understands there's still more: the blind spots she needs someone else for. Someone else who thinks deeply and inquisitively. Someone else who explores without the blind spots she has developed while creating the code to work the way it's supposed to.

Liz is what I would call a "team member with programming emphasis". When asked to identify herself, no matter how much she tests, she will identify as a programmer. But she is also a tester. And many other things.

Personally, I've identified as a "team member with a testing emphasis". That has been a long growth from understanding why someone like Ken Schwaber would, years and years ago, suggest to my manager that I, who wanted to be a tester, should be fired. Thinking it over, I've come to the conclusion that this is one of the ways to emphasize two things:

  1. We are all developers - programmers, testers and many others 
  2. We need to work also outside the focus silos when necessary or beneficial
For years, I did not care much for programming, so I found a way of describing myself that I was more comfortable with than "developer", a word still loaded heavily towards programming. I became a self-appointed team member with a testing emphasis.

This still works, as I've grown further outside my tester box and taken on programmer tasks. It means that while I code (even extensively, even production code, not just test code) the tester in me never leaves. Just like the programmer in Liz never leaves.

Liz can, in addition, be the brilliant tester she is. And I can be the brilliant programmer I intend to be. And yet she can still be the programmer, and I can still be the tester. 20+ years of learning allows growth outside the boxes. But it's still good to remember how we got here.

If the software industry doubles every five years, half of us have less than five years of experience. Perhaps it makes sense to learn a good foundation, starting from different angles, and build on it.

Individuals make teams. And teams are stronger with diversity of skills and viewpoints. 



Automation tests worth maintaining

A retrospective was under way. Post-its with Keep / Drop / Try were added as we discussed the perspectives together. I stood a little to the side, being the loud one, leaving room for other people's voices. And then one voice spoke out, attaching a post-it to the wall:

"It's so great we have full test automation for this feature"

My mind races. Sure, it's great. But the automation we have covers nothing. While creating it for the basic cases, we found two problems. The first one was about the API we were using being overly sensitive to short names, where adding any of those completely messed up the functionality. I'm still not happy that the "fix" is to prevent short names that otherwise could be used. And the second one was around timing when changing many things. To see things positively, the second one is a typical sweet spot for automation to find for us. But since then, these tests have been running, finding nothing.

Meanwhile, I had just started exploring. The number of issues was running somewhere around 30, including the announcement of the "fix" that made the system inconsistent and that I still deem a lazy fix.

I said nothing, but my mind has been racing ever since. How can we have such different perspectives on how awesome and complete the automation is? The more "full" it's deemed, the more it annoys me. I seek useful and appropriate, and in particular over a longer time, not just at the time of creation. I don't believe full coverage is what we seek.

I know what the automated tests test, and I often use them as part of my explorations. There's a thing that enables me to create lists of various contents in various numbers, and I quite prefer generating over manually typing this stuff. There are simple cases of each basic feature that I can run with scripts, then manually add the aspects I want to verify in exploration. I write a lot of code and extend what is there, but I rarely check in what I have - only if there was an insight I want to keep monitoring for the longer-term future.
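As a sketch of what I mean by generating over typing: a minimal Python example (the function name and parameters here are mine, made up for illustration; our actual tooling is not this simple) that produces lists of varying contents in varying numbers, including the short names that tripped up the API:

```python
import random
import string

def generate_names(count, min_len=1, max_len=12, seed=None):
    """Generate a list of random names of varying lengths,
    deliberately including short ones that stress boundary handling."""
    rng = random.Random(seed)
    names = []
    for _ in range(count):
        length = rng.randint(min_len, max_len)
        names.append("".join(rng.choice(string.ascii_lowercase)
                             for _ in range(length)))
    return names

# Varying count and length range gives data sets I would never
# bother typing by hand; a seed makes a run repeatable.
print(generate_names(5, seed=42))
```

A few lines like this extend reach far beyond what manual typing allows, and the seed means an interesting data set can be recreated later in exploration.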

Cleaning up scripts and making them readable is work. Maintaining them when they exist is work. And I want to invest in that work when I believe the investment is worthwhile.

The reason I started to tell this story is that I keep thinking that we do a lot of harm with the "manual" vs. "automated" testing dichotomy. My tests tend to be both. Manual (thinking) is what creates my automation. Automation (using tools and scripts) is what extends my reach in data and time.

Tests worth maintaining are what most people think of with test automation. And I have my share of experience with that through experimenting with automation on various levels.

Wednesday, July 12, 2017

Is Mob Programming just Strong-style Randori?

Back in the days before Mob Programming was a thing, there was a form of deliberate practice referred to as Randori. The idea there was pretty much similar to the mechanics of mobbing: a pair out of a group would work on a problem at a time, and then you'd rotate.
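The rotation mechanics are simple enough to sketch. This toy Python rotation is my own illustration, not anything canonical from the Randori tradition; the names are made up:

```python
from collections import deque

def rotate_pairs(group, turns):
    """Yield (driver, navigator) pairs, rotating through the group
    so that the navigator becomes the next driver."""
    queue = deque(group)
    for _ in range(turns):
        driver, navigator = queue[0], queue[1]
        yield driver, navigator
        queue.rotate(-1)  # driver goes to the back of the queue

print(list(rotate_pairs(["Ann", "Ben", "Cho"], 4)))
# [('Ann', 'Ben'), ('Ben', 'Cho'), ('Cho', 'Ann'), ('Ann', 'Ben')]
```

The interesting part is not the scheduling, of course, but what happens between the rotations.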

My first Randori experience came a long time before I ever heard of someone working in this "mob programming" style, and on a shallow level, the only difference I saw in my first introductions to mob programming was the use of strong-style navigation. So the question emerged: is mob programming really just a strong-style Randori?

I'm blogging because I listened in on a discussion where Llewellyn Falco was explaining a saying he likes:
Pool is not just a bigger bath tub.
Surely, a pool is a container with water in it. So is a bath tub. But the things you can do with a pool are significantly different from the things you can do with a bath tub.

Examples popped up: there's such a thing as a pool guard, but it would make no sense to have a tub guard. Pool parties are a thing, but you might argue that a tub party is a very different thing. The physical exercise aspects of pools are non-existent in tubs; you wouldn't really say you swim in a tub.

While it is a fun word game to make one think, it is also a good way of illustrating why mob programming is not just strong-style Randori. What mob programming, as Woody Zuill and his team introduced it, brings in is hundreds of hours of learning while continuously collaborating - and with that communication, some of the problems we saw no way of making go away just vanish.

Doing things over a long time to grow together makes it different. Mob Programming is different.

And the same applies to many of the things where we like to say that this "has been around forever". Test-after and test-first are essentially different. The things we can do with continuous delivery are essentially different from those with just continuous integration.

Tuesday, June 27, 2017

Incompatible cultures

A few weeks back, I started a talk by introducing myself as someone who is not officially responsible for anything, which makes me unofficially responsible for everything. I also talked about how, working in self-organized teams, I often find myself identifying the gaps and volunteering for things that would otherwise fall between.

I'm a big believer in self-organization, and people stepping up to the challenges. I know self-organized teams make me happy, and I wouldn't care to work in any other way.

A lot of communication is one-on-one, so to talk to my team, I've come to accept that the discussion can come through any of my team mates. There's no "I must be invited to the meeting", but there is "the team's representation needs to be present in the meeting". We learn a lot from each other about what questions the others would like answered, and a lot of the time whoever acts on the information is the best person to be in the discussion, over someone with assigned power.

I've seen what assigned responsibilities do: they create silos and bottlenecks that I spend time bringing down. And yet, culturally, some people just can't believe there is such a thing as a self-organized team - there must be a responsible individual.

I ran into this collision of ideas today, as I was seeking a bigger research-to-delivery task for my team to complete during the difficult summer period when some are here and some are away, and the lack of shared responsibilities really shows its ugliest side. As I was asking, I heard that one of my team members had been "assigned responsible" for the research, and the rest of us would just do tasks he assigns.

I felt the urge to flee. Instead, I wrote this down as a reminder to myself to work more on what I believe an efficient R&D to be: self-organized, with shared responsibilities.

I wonder if that will ever fit the idea of "career advancement" and "more assigned responsibility". Time will tell.

Minimizing the feedback loops

As summer vacations approach, I'm thinking of things I would like to see changed where I feel a recharge is needed before I can take up on those challenges. And I'm noticing a theme: I want to work on minimizing the feedback loops.

The most traditional of the feedback loops is getting the feature just implemented into the hands of the users. I keep pushing towards continuous releasing and the related cultural changes in how we collaborate on making the changes that get published.

But it's not just pushing the changes out for the end users to potentially suffer from. There's a lot of in-company feedback that I'd like to see improve. I get frustrated with days like yesterday, when all test automation was failing and I still failed to get in the changes that would stop the automation from failing on a single prerequisite outside my team's powers. People like walking roads travelled before, when there would be opportunities for better if we sought out ways to do things differently.

The feedback loop that seems the hardest is the one of collaboration. We co-exist, on very friendly terms. But we don't pair, we don't mob, and we don't share the way I would like to see us share.

Maybe after the vacations, I will just push for experimenting while making others uncomfortable, in short time boxes. It's clear there are things to do that will make me uncomfortable alone as well, but the ultimate discomfort for me seems to be making others uncomfortable.

 

Monday, June 12, 2017

From avoiding tech debt to having tech assets

The question I always get when talking about mob programming is how it could be a better / more effective way of working than solo work. The query often continues with: do you have research results on the effectiveness?

As someone with a continuous empirical emphasis in my work as a tester, and someone with a background in research work at a university, I'm well aware that the evidence I care to provide is anecdotal. I have other things to do than research nowadays, and having done research, I realize its complexities. And while anecdotes aren't research results, I can work with anecdotes.

One of the themes I like collecting and providing anecdotes on around mobbing is that to me it makes little sense to compare an individual task; compare a chain of value delivery instead. Many times with mobbing, we end up with significantly less duplication of code, as someone in the group acts as the memory, telling us that something of that sort is already in use somewhere else.

Here's an anecdote I added to my collection just today: "QA person, where were you 9 hours ago when your knowledge would have saved us from all this work?". A team of programmers was mobbing, wondering how to work with a particular technology. To everyone in the group, it seemed there was significant implementation work of a scaffolding type ahead, and the team set out to do that work. Later, another person became available to join the mob and, with the knowledge available to them, eradicated all the work done until that point just by having the information: an appropriate library for the scaffolding was already available, and was used in the tests.

I've seen my own team talk around an implementation, starting with one strong idea and ending up with the best of what the group had to offer. I've watched my team express surprise when days of work get eradicated by learning the work has already been done elsewhere. I've watched them come to the realization that whatever they would have implemented solo would have been re-implemented to better match the architectural principles or the best use of common components.

I've also had the chance of seeing a mob go through about ten solutions to a detailed technical problem, just to find the one with the fewest tradeoffs between maintainability, performance and side-effect functionality.

A lot of times the best result - the one paying back in the long term - never emerges from solo work. And that just makes comparing the effort it takes to generate some value in a mob vs. solo all the more difficult. It's not the task, it's not the delivery flow; it's the delivery+maintenance flow that we need to be comparing.

Tuesday, June 6, 2017

Fill the Gap

About two weeks ago, business as usual, I installed the latest build to notice that clearly someone from some other team had worked on our user interface. Whatever we had done to make it nice enough had been replaced by problems I did not quite understand. I reported the issue to offload it, and focused on other things of relevance.

With communication through various steps on what the status was, we got the word that it would be fixed soon. Days passed, and soon wasn't soon enough. We finished another feature we needed to release, and a thing of temporary annoyance turned into a release blocker.

Friday afternoon, I decided to take a moment for the legwork, learning first that the developer making the changes had left for three weeks of vacation, and that the second developer had very partial knowledge of how the changes he contributed made their way into the build. He also pointed out that he had fixed "the issue" three hours earlier and sent whatever he was doing over email to the one now on vacation.

Asking around a little more, I learned what the thing was that had been sent over email, and where it belonged - and that it was in place, yet the problems still persisted. I learned to do the necessary tweaks there myself - all I needed was to know what to tweak.

Monday started with fierce determination to get the problem over and done with. I sat down with the second developer to show him what I saw in the product, and he showed me what he saw in his component test environment. It became very obvious that the simulator he was running was not a match for the real end user environment with the problem. We narrowed the problem down to seven lines of CSS, and eventually to one line of CSS.

The mystery started to unfold. The second developer would provide a piece of stylesheet that was correct. By the time it was in the product, it was incorrect. If it had stayed as it was originally given, there would have been no problem.

Hunting down a bunch of Jenkins jobs in the pipeline, I learned the problem was in encoding a particular character that shouldn't get encoded. Speculating on the field that got encoded, we realized removing the encoding would have further effects. What came about was a funny hour of experimenting with what could possibly work. Hundreds of characters of speculative solutions without a meaning, and an argument about clear code vs. comments later, we found one that made sense and fixed it.
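I won't name the character here, but the failure mode is easy to illustrate: a pipeline step that blindly HTML-encodes whatever passes through it will mangle perfectly valid CSS. In this sketch the child combinator `>` stands in as a hypothetical example, not the actual character involved:

```python
import html

# A correct stylesheet snippet, as the developer provided it.
original_css = "ul > li { margin: 0; }"

# A pipeline step that blindly HTML-encodes the field it carries.
encoded_css = html.escape(original_css)

print(encoded_css)  # ul &gt; li { margin: 0; } - no longer valid CSS
```

The stylesheet is correct on the developer's machine and broken in the product, exactly because the corruption happens in a build step between the two.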

It all started with the idea of a bug that needed fixing. It continued with realizing that in a long chain of new and old pieces, ownership wouldn't be straightforward. And I did what we all do in our turn: identify a gap, fill the gap, and collaborate on getting things forward.

In addition to finding the gap, I sat next to people to get the gap filled. I don't need to be assigned responsible to be responsible. 

I could easily still be waiting but I'm not: I fixed the bug. 

Friday, May 26, 2017

Incremental steps to continuous releases

The last eight months for me have had one theme in particular that I consistently drive forward, in small steps that sometimes feel small enough that others don't realize how things are changing.

There's an overall vision in my mind: I want to take us through the transformation to daily releases for the Windows client + management backend product I'm working with.

Where I started from

As I joined 8 months ago, the team had been working for several months on a major architectural type of change - no releases, just a build that could be played with internally. We had "8 epics" to drive through the architectural changes, and none of them were done. There were a lot of dependencies all around, and making a release someone would use wasn't a straightforward task.

I started in September. The first release went out November 23rd.

There's more than a decade of history on making continuous releases of the detection and cleanup functionalities within the product, but the frame of the product has been released annually or quarterly for production use, and monthly or biweekly for beta - something I was introducing here a decade ago.

When I started talking of daily releases, I was told it was impossible. It took me 4 months to get rid of the "it cannot be done" comments.

The pain of regularity is necessary


I had a firm belief (which I still hold) that when things are deemed hard, you just need to do more of them to learn how to make them less hard. So I struggled with my team through the discussions of "releasing takes too much time and is away from real work", with the support of our manager setting a team goal, tied to bonuses, that we would turn our 4-day release into a 4-hour release.

Each release would see a little more automation. Each release would see a little more streamlining. We would find things that would be difficult (not impossible) to change and postpone those, focusing first on the low-hanging fruit, never giving up on the ultimate goal: a touch of a button releasing to various environments.

A month ago, I could happily confirm that the 1st goal, as it ended up being written down, was achieved:
[Team Capability] Turn 4 day release to 4 hour release
We believe that the ability to make our client releases in a shorter duration will result in saved time when making multiple releases. We will know we have succeeded when the team does not feel the need to escalate release-making as a threat to features.
We also worked on another capability:
[Team Capability] Min 2 people can make client releases
We believe that having at least two people with the skills, knowledge and accesses to make client releases will result in being able to make releases while one is sick. We will know we have succeeded when a release happens without the 1st key person present at the office, within the same / similar timeframe.
What next?

We have come to a point of bi-weekly releases, which only takes us to the level I introduced a decade ago. Building on that, the next thing would be to figure out ways of not breaking the builds within the 2-week intervals, and that change takes me far beyond just my own team, including changing the ways test automation supports our development.

There's still work in turning the four hours into four minutes of work, and I look forward to stepping through that challenge.

Our very first production environment release was just made. With more environments in play, each 4 hours can easily grow fivefold, so that would be a next step to work on too.

So the vision I'm working for:
[Team Capability] Four-minute release throughout the environments
We believe that having a push-of-a-button release will result in us focusing more on valuable features and improvements for the user and our organization. We will know we have succeeded when releases happen on a daily basis as features / changes get introduced.
Why would I, the tester, care for this?

I have people every now and then telling me this is not testing. But this fundamentally changes the testing I do. It enables me to test each change, isolate it, and see its impacts all the way through to production. It supports small, human-sized discussions on changes together in the teams, and gives us the ultimate definition of done: production value over task completion.

It makes developers care about the feedback I give, and enables that feedback to be more timely. And it makes way for the necessary amount of thinking and manual work in both coding and testing, so that what we deliver is top-notch without exerting too much effort.


Pair Testing with a 15-year-old

A few months back, I had the pleasure of working with a trainee at F-Secure. As is usual in schools in Finland, there was a week of work practice with the option of taking a job your school assigns you (I did mine at the age of 15 in a home for the elderly) or finding one of your own. This young fellow found one of his own through his parents, and I jumped on the opportunity to pair test with him.

At first, he did not have a working computer, so it was natural for us to get started with strong-style pairing:
With an idea from my head to keyboard, it must go through someone else's hands (Llewellyn Falco)
He was my hands as I was testing the firewall. And in many ways he was a natural in this style of work. He would let me decide where to go and what to do, but speak back to me about his observations and ideas, extending what I could see and do all by myself. Most of the things we did together were things I would have done by myself. The only difference was the times we went to the whiteboard to model what we knew and had learned, where I guided him to navigate me through the ideas to document, very much in the same strong style. As the driver drawing, I would ask questions based on our shared testing experience when he seemed to miss a concept.

His ability to test grew fast. He learned to use the application. He learned to extend his exploration with test automation that existed and play with it to create the type of data we wanted.

My reward was to see him enjoy the work I love so much. His words at the end of our joint experience, without me prompting, still make me smile: "I now understand what testing is and would love to do more of it".

He joins us for a full month in June. I can't wait to pair up with him again.

Wednesday, May 24, 2017

Impact of Test Automation in my Everyday Worklife

I'm not particularly convinced of the testing our team's test automation does for us. The scenarios in the automation are somewhat simple, yet take extensive time to run. They are *system tests*, and I would very much prefer seeing more tests around the components the team is responsible for. System tests often fail because of dependencies outside the team's control.

I've been actively postponing really doing something about it, and today I stopped to think about what the existence of this minimal automation has meant for me.

The better test automation around here seems to find random crashes (with logs and dumps that enable fixing), but that is really not the case with what I'm seeing up close.

The impact the existence of test automation has had on my everyday work life is that I can see at a glance if the test systems are down, so I don't need to pay attention to installing regularly just to know the product still installs.

So I stopped to think: has this really changed something for me, personally? It has. I feel a little less rushed with my routines. And I can appreciate that.

Tuesday, May 9, 2017

Bias for action


'Bias for Action'. That's a phrase I picked up ages ago, yet one that has been keenly on my mind for some time now.

It means (to me) that if I can choose planning and speculating vs. doing something, I should rather be doing something. It's in the work we do that we discover the work that needs doing.

There are things I feel need doing, and I notice myself trying to convince others to do those rather than doing them alone. I notice being afraid of going in and starting the restructuring of our test automation into a shape that would make more sense.

Without bias for action, I procrastinate. I plan. I try to figure out a way of communicating. I don't get anything done.

With bias for action, I make mistakes and learn. I make myself more vulnerable and work with my fears of inadequacy.

It's been such an important thing to remember: things don't change without changing them. And I can be a person to change things I feel strongly for.

Thursday, April 20, 2017

Dear Developer

Dear Developer,

I'm not sure if I should write to thank you for how enthusiastically you welcome feedback on what you've been working on and how our system behaves, or to ask you to understand that this is what I do: provide you actionable feedback so that we can be more awesome together.

But at least I want to reach out to ask you to make my job of helping you easier. Keep me posted on what you're doing and thinking, and I can help you crystallize what threats there might be to the value you're providing, and find ways to work with you to have the information available when it is most useful. What I do isn't magic (just as what you do isn't magic) but it's different. I'm happy to show you how I think around a software system whenever you want. Let's pair; just give me a hint and I'll make the time for you.

You've probably heard of unit tests, and you know how to get your own hands on the software you've just generated. You tested it yourself, you say. So why should you care about a second pair of eyes?

You might think of testing as confirming whatever you think you already know. But there's other information too: there are things you thought you knew but were wrong about. And there are things you just did not know to know, and spending time with what you've implemented will reveal that information. It could be revealed to you too, but having someone else there, a second pair of eyes, widens the perspectives available to you and can make the two of you together more productive.

We testers tend to have this skill of hearing the software speak to us, hinting at problems. We are also often equipped with an analytic mind to identify things you can change that might make a difference, and the patience to try various angles to see if things are as they should be. We focus our energies a little differently.
 
When the software works and provides the value it is supposed to, you will be praised. And when it doesn't work, you'll be the one working late nights and stressing over the fixes. Let us help you get to the praise and avoid the stress of long nights.

You'd rather know and prepare. That's what we're here for: to help you consider perspectives that are hard to keep track of when you're focused on getting the implementation right.

Thank you for being awesome. And being more awesome together with me.

     Maaret - a tester

Time bombs in products

My desk is covered with post-it notes of things that I'm processing, and today, I seem to have taken a liking to doodling pictures of little bombs. My artistic talent did not allow me to post one here, but just speaking about it lets you know what I think of. I think of things that could be considered time bombs in our products, and ways to better speak of them.

There's one easy and obvious category of time bombs when working in a security company, and that is vulnerabilities. These typically have a few different phases in their life. There's the time when no one knows of them (that we know of). Then there's the time when we know of them but others don't (that we know of). Then there's the time when someone other than us knows of them and we know they know. When that time arrives, it no longer matters much whether we knew before or not; fixing commences, stopping everything else. And there are times when we know, and let others know, because there is an external mitigation or monitoring that people could do to keep themselves safe. We work hard to fix things we know of before others know of them, because working without external schedule pressure is just so much nicer. And it is really the right thing to do. The right thing isn't always easy, and I love the intensity of analysis and discussion that vulnerability-related information causes here. It reminds me of the other places where the vulnerabilities were time bombs we just closed our eyes to, and where even publishing them wouldn't make assessing them a priority without a customer escalation.

Security issues, however, are not the only time bombs we have. Other relevant bugs are the same. And with other relevant bugs, the question of timing sometimes becomes harder. For things that are just as easy to fix in production as while developing an increment, timing can become irrelevant. This is what a lot of the continuous deployment approaches rely on - fast fixing. Some of these bugs, though, when found, have already caused significant damage. Half of a database is corrupted. Communication between client and server has become irrecoverable. A computer fails to start unless you know how to go in through the BIOS and hack registries so that starting up is again possible. Bugs with impacts beyond inconvenience are the ones that can bring a business down or slow it to a halt.

There's also the time bombs of bugs that are just hard to fix. At some point, someone gets annoyed enough with a slow website, and you've known for years it's a major architectural change to fix that one.

A thing that seems common with time bombs is that they are missing good conversations. Good conversations tend to lead in the right direction on deciding which ones we really need to invest in, right now. And for those not now, what is the right time for them?

And all of this after we've done all we can to avoid having any in the first place. 


Wednesday, April 19, 2017

Test Communication Grumpiness

I've been having the time of my life exploratory testing a new feature, one that I won't be writing details on. I have the time of my life because I feel this is what I'm meant to do as a tester. The product (and people doing it) are better because I exist.

It's not all fun and happy though. I really don't like the fact that yet again, the feedback that I'm delivering happens later than it could. Then again, given the ability, interest and knowledge to react to it, it feels very timely.

There are three main parts to the "life of this feature". First it was programmed (and unit tested, and tested extensively by the developer). Then some system test automation was added to it. I'm involved in the third part of its life, exploring it to find out what it is and what it should be from another perspective.

As the first and second parts were done, people were quick to communicate it was "done". And if the system test automation were more extensive than it is, it could actually be done. But it isn't.

The third part has revealed functionalities we seem to have but don't. Some we forgot to implement, as there was still an open question regarding them. It has revealed inconsistencies and dependencies. And in particular, it has revealed cases where the software, as we implemented it, just isn't complicated enough for the problem it is supposed to be helping with.

I appreciate how openly people welcome the feedback, and how actively things get changed as the feedback emerges. But all of this still leaves me a little grumpy on how hard communication can be.

There are tasks that we know of, like knowing we need to implement a feature for it to work.
There are tasks that we know will tell us of the tasks we don't know of, like testing of a feature.
And there are the tasks that we don't know of yet, but they will be there.

And we won't be done before we've also addressed the work we just can't plan for.

Wednesday, March 29, 2017

Test Planning Workshop has Changed

I work on a system with five immediate teams, and at least another ten I don't care to count due to organizational structures. We needed some test planning across the five immediate teams. So the usual happened: a calendar request to get people together for a test planning workshop.

I knew we had three major areas where programmer work is split in interesting (complicated) ways across the teams. I was pretty sure we'd easily see the testing each of us would do through the lenses of responding to whatever the programmers were doing. That is, if one of our programmers would create a component, we would test that component. But integrating those components with their neighbors and eventually into the overall flows of the system, that was no longer obvious. This is a problem I find that not all programmers in multi-team agile understand, and the testing of a component gets easily focused on whatever the public interface of the team's component is.

As the meeting started, I took a step back and looked at how the discussion emerged. First, there was a rough architectural picture drawn on the whiteboard. Then arrows emerged in explanation of comparing how the *test automation system* works before the changes we are now introducing - a little lesson of history to frame the discussion. And from there, we all together very organically talked on chains and pairs and split *implementation work* to teams.

No one mentioned exploratory testing. I didn't either. I could see some of it happening while creating the automation. I could see some of it not happening while creating the automation, but being work I would rather have people focus on after the automation existed. And I could see some of it, the early parts, as things I would personally do to figure out what I didn't yet even know to frame as a task or a risk.

Thinking back 10 years on time before automation was useful and extensive, this same meeting happened in such a different way. We would agree on who leads each feature's testing effort, and whoever would lead would generate ways for the rest of us to participate in that shared activity.

These days, we first build the system to test the system, explore while building it and then explore some more. Before, we used to build a system of mainly exploration, and tracking the part that stays was more difficult.

The test automation system isn't perfect. But the artifact that we, the five teams, can all go to and see in action, changes the way we communicate on the basics.

The world of testing has changed. And it has changed for the better.

Tuesday, March 28, 2017

World-changing incrementalism

As many exploratory testers do, I keep going back to thinking about the role of programming in the field of testing. At this point of my career, I identify both as a tester and a developer and while I love exploratory testing, maintainable code comes close. I'm fascinated by collaboration and skills, and how we build these skills, realizing there are many paths to greatness.

I recognize that in my personal skills and professional growth path there have been things that really make me more proficient but also things that keep me engaged and committed. Pushing me to do things I don't self-opt-in is a great way of not keeping me engaged and committed, and I realize, in hindsight that code for a long time had that status for me.

Here's still an idea I believe in: it is good to specialize in the first five years, and generalize later on. And whether it is good or not, it is the reality of how people cope with learning things, taking a few at a time, practicing and getting better, having a foundation that sticks around when building more on it.

If it is true that we are in a profession that doubles in size every five years, it means that in a balanced group half of us have less than five years of experience. Instead of giving the same career advice to everyone, I like to split my ideas on how to grow between these two halves: the ones coming in and getting started vs. the ones continuing to grow in contribution.

I'm also old enough to remember the times when I could not get to testing the code as it was created, but had to wait months before what we knew as a testing phase. And I know you don't need to be old at all to experience those projects, there's still plenty of those to go around. Thinking about it, I feel that some part of my strong feelings of choosing tester vs. developer early path clearly come from the fact that in that world of phases, it was even more impossible to survive without the specialization. Especially as a tester, with phases it was hard to time box a bit of manual and a bit of automation, as every change we were testing was something big.

Incremental development has changed my world a lot. For a small change, I can explore that change and its implications from a context of having years of history with that product. I can also add test automation around that change (unit, integration or system level, which ever suits best) and add to years of history with that product. I don't need a choice of either or, I can have both. Incremental gives me the possibility, that is greatly enhanced with the idea of me not being alone. Whatever testing I contribute in us realizing we need to do, there's the whole team to do it.

I can't go back and try doing things differently. So my advice for those who seek any is this: you can choose whatever you feel like choosing, the right path isn't obvious. We need teams that are complete in their perspectives, not individuals that are complete. Pick a slice, get great, improve. And pick more slices. Any slices. Never stop learning.

That's what matters. Learning.

Changing Change Aversiveness

"I want to change the automatic installations to hourly from the 4-hour interval they've had before". I suspected that could cause a little bit of discussion.

"But it could be disruptive to ongoing testing", came the response. "But you could always do it manually", came a proposal for an alternative way of doing things.

I see this dynamic all the time. I propose a change and meet a list of *but* responses. And at worst they end up with *it depends* as no solution is optimal for everyone.

In mob programming, we have been practicing the idea of saying yes more often. When multiple different ways of doing something are proposed, do all. Do the least prominent one first. And observe how each of the different ways of doing teaches us not only about what worked but what we really wanted. And how we will fight about abstract perceptions without actual experience, sometimes to the bitter end.

This dynamic isn't just about mob programming. I've ended up paying attention to how I respond in ways that make others feel unsafe in suggesting the changes, after I first noticed the pattern of me having to fight for change that should be welcomed.

Yes, and... 

To feel safe to suggest ideas, we need to feel that our ideas are accepted, even welcome. If all proposals are met with a list of "But...", you keep hearing no when you should hear yes.

The rule of improv, "Yes, and...", turns out to have a lot of practical value. Try taking whatever the others suggest and offer your improvement proposal as a step forward, instead of as a step blocking the suggestion.

Acknowledge the other's experience

When you hear a "But...", start to listen. Ask for examples. When you hear of their experiences and worries, acknowledge those instead of trying to counteract them. We worry for reasons. The reasons may be personal experiences, very old history, or something that really, justifiably, we all should worry about. The worry is very real to whoever is experiencing it.

A lot of times I find that just acknowledging that the concern is real helps move beyond the concern.

Experiment

Suggest trying things differently for a while. Promise to go back or try something different if the change doesn't work. And keep the promise. Take a timebox that gives an idea a fighting chance.

People tend to be more open to trying things out than making a commitment on how things will be done in the long term. 

Monday, March 27, 2017

The Myth of Automating without Exploring

I feel the need of calling out a mystical creature: a thinking tester who does not think. This creature is born because of *automation*. That somehow, because of the magic of automation, the smart, thinking tester dumbs down and forgets all other activities around and just writes mindless code.

This is what I feel I see when I see comparisons of what automation does to testing, most recently this one: Implication of Emphasis on Test Automation in CI.

To create test automation, one must explore. One must figure out what it is that we're automating, and how we could consistently check the same things again and again. And while one seeks information for the purposes of automation, one tends to see problems in the design. Automation creation forces our focus onto detail, and this focus on detail that comes naturally with automation sometimes needs a specific mechanism when freeform exploring. Or rather, the mechanism is the automation-thinking mindset.

I remember reading various experience reports of people explaining how all the problems their automation ever found were found while creating the automation. I've had that experience in various situations. I've missed bugs for choosing not to automate because the ways I chose to test drove my focus of detail to different areas or concerns. I've found bugs that leave my automated tests in "expected fail" state until things get fixed.

The discussion around automation is feeling weird. It's so black and white, so inhumane. Yet, at core of any great testing, automated or not, there is a smart person. It's the skills of that person that turn the activity into useful results. 

Only the worst of the automators I've met dismiss the bugs they find while building the automation. Saves them time, surely, but misses a relevant part of feedback they could be providing. 


A Regular Expression Drive-By

I was working strong-style paired on my team's test automation code last week, to assess candidates to help us as consultants for a short timeframe of ramping up our new product capabilities. The mechanism of "an idea from your head to the computer must go through someone else's hands" lends itself well to assessing both skills and collaboration. At first, I would navigate on the task I had selected - cleaning up some test automation code. But soon, I would hand the navigation over to my pair and be the hands writing the changes.

There was this one particular line of code that in both sessions caught my eye and was emphasized by the reaction of my pairs: "This should have a code comment on it"; "Ehh, what does this do? I have no idea!". It was a regular expression verifying whether a message should be parsed as passed or failed, but the choice of the sought-for keyword was by no means obvious.

I mentioned this out loud a few days later, just to seek confirmation that instead of the proposed code comment, it should really be captured in a convenience method with a helpful name. But as we talked about the specific example, we also realized that it would make sense to add a unit test on that regular expression to explain the logic just a bit more.

The unit test would start failing if, for any reason, the messages we use to decide on pass/fail were no longer available, and it would be a more granular way of identifying where the problem is than reading the logs of the system test.
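A minimal sketch of what such a unit test could look like, using only Python's standard library. The keyword, the message formats, and the function names here are invented for illustration; the actual automation's regular expression is not shown in this post.

```python
import re

# Hypothetical: "PASSED"/"OK" stand in for whatever keyword the real
# regular expression was looking for in the tooling's messages.
RESULT_PATTERN = re.compile(r"\b(PASSED|OK)\b")

def parse_result(message: str) -> str:
    """Classify a raw log message as 'passed' or 'failed'."""
    return "passed" if RESULT_PATTERN.search(message) else "failed"

# Unit tests documenting the regex's logic; if the tooling ever changes
# its message format, these fail first and pinpoint the problem faster
# than digging through system test logs.
def test_passing_message():
    assert parse_result("Suite finished: PASSED") == "passed"

def test_failing_message():
    assert parse_result("Suite finished: FAILED") == "failed"

def test_message_without_verdict_counts_as_failed():
    assert parse_result("Suite finished with no verdict") == "failed"

test_passing_message()
test_failing_message()
test_message_without_verdict_counts_as_failed()
```

Together with extracting the expression into a well-named convenience method, tests like these make the intended logic readable without the code comment that was first proposed.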

A regular expression drive-by made me realize we should unit test our system tests more. 

Friday, March 24, 2017

Find the knobs and turn them

"What happened at work?" is a discussion I get to have daily. And yesterday, I was geeking out on installing and configuring a Windows Server as a domain controller, just so that I would have one more route to put things on a list that our product was supposed to manage.

Instead of talking about the actual contents, the discussion quickly moved to meta, through pointing out that a lot of my stories of what I do for work include finding this button, lever or knob, and twisting, pushing, pulling, even intentionally isolating it. I find things that give me access to things others don't pay attention to.

"I'm sure a developer did not take two hours to set the server up just for this test", I exclaimed. And continued with "while I was setting this up, I found four other routes to tweak that list." It was clear to me that if there was anything interesting learned from the 1st route I was now working on, the four others would soon follow.

Think about it: this is what we do. We find the knobs of the software (and build those knobs to be available in the system around our software) just so that we see, in a timely fashion, what happens when they are turned.

It turns out you may find some cool bugs thinking like this.

From appreciation of shallow testing towards depth

"So, Maaret Pyhäjärvi is an extraordinary exploratory tester. ... She took ApprovalTests as a test target. She's like "I want to exploratory test your ApprovalTests" and I'm like "Yeah, go for it", cause it's all written test first and it's code I'm very proud of. And she destroyed it in like an hour and a half. She destroyed it in things I can't unit test. One of the things she pointed out right away was "Your documentation is horrible. You're using images that you can't even copy and paste the examples from". And I'm, like, "yeah, that's true". And then she's like "Look at the way you write this API, it's not discoverable". And that's a hard thing for me to deal with because for me, I know exactly where the API is. One of the things I constantly struggle with is beginner mindset. And it's so easy to lose that and then never appreciate it in the beginning. You're like "no, idiot, you're supposed to do it this way". So this idea that my names are not discoverable is not something I could unit test, but she was able to point it out right away. And after pointing it out, and sort of arguing a little bit, she did this thing where she... She did it in a session. I attended the session, but everybody is doing mob exploratory testing, and now I'm watching like 10 people not being able to find a reporter. There's nothing like watching people not be able to use your product to make you appreciate you've done it wrong. I was like "oh, this is so painful, I never want to see that again".

What I found is that it used to be the case that we would write code and it was horrible. It was buggy and just so full of problems. And there were so many bugs where what we intended to occur wasn't what was happening, so that all testing was, was checking that what the code did was what the programmer intended. This is all we had time for. As we started doing unit testing and automated testing, and test first, those problems started to go away. So now what the code does is what we intend it to do. And then it turns out there is this entire other world of: is what you intended what you want? And it turns out, that's still a remarkably complex world. So you don't want to spend your time fighting with "what I intended is not what the code does", so you need the unit tests for that. But we also need this much bigger world of: is what I intended what I actually want? What are the unforeseen consequences of these rules? That starts moving to exploratory testing and monitoring. Which is effectively exploratory testing via your users."
The story above is a great story about how one programmer learned there was more to testers' contributions than he could have seen. It's great hearing Llewellyn pass hints to other programmers in a meetup, like yesterday: "Your testers know of more bugs than what they tell you. Even though it feels they tell you a lot, they still know more. Ask them, don't just wait for them to tell you."

Some of the emphasis in the text above is there to add more to the story.

1,5 Hours is Shallow Testing and Excludes Earlier Learning

While a tester can in "just an hour and a half" get you to rewrite half of your API, there's more depth to that testing than just the work immediately visible. Surely, when I started testing ApprovalTests, I already knew what it was supposed to be for, and the hours in the background getting familiarized count in what I could do. I had ideas on what a multi-language API in IDEs should be like, and out of my 1,5 hours, I still used half an hour on two research activities: I googled what a great API is like, and I asked user-perspective questions from Llewellyn to find out what he thinks ApprovalTests Approvals and Reporters do - collecting claims.

With the claims in particular, and with consistency across languages taking into account language idiosyncrasies, I could do so much more with deep exploratory testing than he has yet seen. That's what I do for my developers at work.

Things You Can and Can't Unit Test For

While discoverability of an API in an IDE doesn't strike one as an idea to unit test for, after you have that insight, it is something you can change your unit tests to include. Your unit tests wouldn't notice if the API turned hard to discover again, but they would give you updated control over what you now intend it to be.

The reason I write of this is that a lot of times when I find something through exploration, I have a tendency to tell myself that this insight couldn't be a unit test because I found it in the system context. After an insight exists, we could do a lot more to turn those insights into a smaller scale and avoid some of the pain that at least I am experiencing with system-level test automation. We need to understand better (through talking about it) what the smallest possible scope is for finding a particular problem.

When Making a Point, Try Again

The story above hints at arguments over the API, which were much less arguments than discussions on what is practical. Changing half of your API after you have thousands of users isn't exactly a picnic in the park, and as a tester, I totally get that many organizations don't really care about that feedback on discoverability when it is timed wrong - get your testers involved before your users fix your world.

I would believe I got my message through, with Llewellyn already telling my experience. But surely, I do have a tendency of advocating for the bugs I care about, and getting an experience with your real users trying to use your software is a powerful advocacy tool.

As an exploratory tester, I could write a chapter about the ways I've tried advocating for things that my devs don't react on, just to be sure we understand what we don't fix. Perhaps that's what I do next for my exploratory testing book on Leanpub.

Where Most of the Software World Is

Getting to work with developers who do test-driven development and test with the commitment Llewellyn shows is rare. When in the second part of the excerpt he talks about testing for what the programmer intended, I can't help but realize that out of the hundreds of developers I've had the pleasure of working with, I can count the ones who do TDD on one hand's fingers.

Let's face it. The better ones among us unit test at all. And even that is still not a majority. And generally, most of us still suck at unit testing. Or even if not personally, we know a friend who does.

When I explore, it is a rare treat to have something where the software does even what the programmer intended. So I often start with understanding that intent through exploring the happy, expected paths. I first have empathy for what the world could be if the programmer was right in what he knew today while implementing this.

But even the TDD-ers I approach with scepticism. Llewellyn's meetup talk yesterday introduced Asserts vs. Approvals, and he had a slide comparing someone's Assert-TDD end result to his Approvals-TDD end result. He pointed out that the tests on the left (Assert-TDD) missed a bug in the code, the value 4 being represented as IIII, whereas the test on the right (Approvals-TDD), run against the other's code, found that missed bug.
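To make the slide's comparison concrete, here is a stdlib-only sketch, assuming the example was a Roman numeral kata as described above; the real ApprovalTests library works differently (it diffs output against an approved file and launches a reporter on mismatch), so this only illustrates the idea.

```python
def to_roman(n: int) -> str:
    # Deliberately buggy, mirroring the slide: no rule for 4, so 4 -> "IIII".
    numerals = [(10, "X"), (9, "IX"), (5, "V"), (1, "I")]
    result = ""
    for value, symbol in numerals:
        while n >= value:
            result += symbol
            n -= value
    return result

def test_with_asserts():
    # Assert-style TDD: a few hand-picked values; nothing forces you to try 4.
    assert to_roman(1) == "I"
    assert to_roman(5) == "V"
    assert to_roman(10) == "X"

def render_all(upto: int) -> str:
    # Approval-style: render one human-reviewable snapshot of many cases,
    # to be compared against a previously approved snapshot.
    return "\n".join(f"{i} -> {to_roman(i)}" for i in range(1, upto + 1))

test_with_asserts()      # passes; the bug goes unnoticed
print(render_all(10))    # reviewing the snapshot, "4 -> IIII" jumps out
```

The point is in where the human attention lands: the assert tests only exercise the values someone thought of, while the approval snapshot puts every value in front of a reviewer's eyes at once.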

As a tester, I would have been likely to check how the developer tested this. My life would have been a lot simpler reading the Approvals file, with formatting and scenarios collected. But even if I did not read the code, I would likely have gone for sample values that I find likely to break.

What you usually get in TDD is your best insight. And our shared insight, together, tends to be stronger than yours alone. I tend to generate different insight when my head is not buried in the code.





Wednesday, March 15, 2017

Don't pay to speak, get paid to speak

I strongly believe the world of tech conferences needs to change, and the change I call for is that whoever the conference organizers deem good enough to step on their podium to speak, should not have to pay to speak. And when I talk about paying to speak, I speak of expenses.

In case of paying for travel expenses, and encouraging the cheapest possible travel, there's a second step. When booking early, pay back early. Don't use your speakers as a bank and pay back after the conference.

I work towards changing these two.

Other people ask for more, and I would love to join them. They ask to be paid to speak. They ask for the time they put on preparing to be compensated. And since the work you do is not preparing the talk, it's becoming the person that gets on that stage, the speaking fees should be relevant.

In paying, a big injustice is when some people get paid differently than others. The injustice of it just gets bigger when conferences give replies like this on paying some but not others.
As a conference organizer, I want to share my perspective.

I set up a conference that:

  1. Pays travel expenses of the speakers. All the speakers.
  2. Profit shares with all the speakers. Keynotes get 5* what a 30-minute slot speaker gets. 
The second happens only if there is profit. And I work hard to make profit. In 2017, I failed. I made losses. 

If I had committed early on to paying my speakers, I would have lost more than the 20k that I lost now. This loss is insignificant, as it is an investment in a great conference (and great it was!) and an investment in making things right in the world of speakers. But imagine if I had a thousand euros to pay to each of my speakers; I would be down 30k more.

What I failed in was marketing. Getting people to learn about the conference. Yet, I feel that whoever came are the right people. 


To make marketing easier, famous names help. Some famous names are willing to risk it to not be paid for their time, and I'm very grateful for that. But others have a fixed price range, paid in advance. When as an organizer you want to invite one like that, you fill the other similar slots with people who are not the same: people who don't insist on being paid fairly. But lying about it is just stupid. The speakers talk. And should talk more in the future.

As an organizer, I rather leave out the superstars if the same fees principle is a problem for them. And honestly, it is a problem for many of our tech superstars. But things change with conferences only if we change them. One conference at a time. 

Meeting is not just a meeting

We're sitting in a status / coordination meeting, and I know I don't look particularly happy to be there. The meeting, scheduled at 3pm has been lurking on my mind the whole day and for the last hour before it, I recognize I have actively avoided doing anything I really should be doing. And what I really should be doing is deep thinking while testing. I feel there must be something wrong with me for not being able to start when my insides are seeing the inevitable interruption looming.

It's not just the inconvenient timing at the seeming end part of my day that has negative impacts on my focus. It's also the fact that I know the meeting is, in my perspective, useless and yet I'm forced there trying to mask most of my dislike. It drains my energy even further.

In the ten years of looking at agile in practice, one of my main lessons has been that planning the work is not the work. I can plan to write a hundred blog posts, and yet I have not written any of them except for a title. I can plan to test, yet the plan never survives the contact with the real software that whispers and lures me into some cool bugs and information we were completely unaware of while planning.

I love continuous planning, but that planning does not happen in workshops or meetings scheduled for planning. It happens as we are doing the work and learning. And sitting in a team room with insightful other software developers, any moment for planning is almost as good as any other. The unscheduled "meeting" over a whiteboard is less of an interruption than the one looming in my schedules.

I know how I feel, and I've spent a fair deal of time understanding those feelings. I know how to mask those feelings too, to appear obedient and, as a project manager put it, "approach things practically". But the real practice for me is aspiring to be better, and to accommodate people with different feelings around same tasks.

Planning is not doing the work. But it does create the same feeling of accomplishment. When you visualize the work, you start imagining the work is done. And if you happen to be a manager who sits through meetings day in and out, the disruptiveness of a meeting in schedule isn't as much as it is when you are doing the work.

I used to be a tester. Then I became too good to test, and took the role of a manager. I was still good, just paying attention to different things. But the big learning for me came when I realized that to have self-organized teams, as we introduced agile a decade ago in the organization, I was a hindrance. My usefulness as a manager stopped the people from doing the work I was doing. Stepping down, announcing the test manager role gone, and just teaching all the work I had been doing to the teams was the best choice I've made.

And it made me a tester again. But this time around, I don't expect a manager to be there. I expect there's a little manager in every one of us, and the manager in others helps me manage both the doer and the manager in me.

The two roles were different for me. And awareness of that keeps me wary of meetings.

Monday, March 13, 2017

A Mob Testing Experience

During my 6 months at the new job, I've managed to do Mob Testing a few times. Basically, the idea is that whenever I sink into a new feature that needs exploring, I invite others to join me for the exploration for a limited time. I've been fascinated by the perspectives and observations of the other testers who have joined me, but these sessions always leave me wanting compared to the Mob Testing experiences I had at my earlier place of work. There, it wasn't only testers who joined (well, there were no testers other than myself); we did the tasks together with the whole team, with programmers joining in.

There's a big difference in whether you're mob testing amongst testers (or quality engineers, as we call them) or including your team's developers and even product owners. And the big difference comes from having the people who need to receive the feedback testing provides share the work.

With 6 months approaching, I'm starting to see that my not-so-subtle hints on a regular basis are not taking the adoption of mob testing / programming further. It became funny at the point where I taught developers from another organization who started off with the practice, and only through their positive reports did someone relevant enough to push people to try it take the initiative. There's an infamous old saying that no one is ever a prophet in their own land, and that kept creeping into my thoughts - I became part of the furniture, "always been here", surprisingly quickly. And I don't push people to do things they don't opt in to.

Finally, last Wednesday, while my own team did not opt in, the team next door did and invited me to join their experience. With two application developers, two test developers and two all-around test specialists, we took the time to mob for about 5 hours during the day.

The task we were working on was a performance testing task, and the application developers were outside their strong area. We worked on extending an existing piece of code for a specific purpose, and the idea of the task was clear enough to start our session with. There were a few particularly interesting dynamics.

When in disagreement, do the less likely one first

About half an hour into our mobbing, we had a disagreement on how we would approach extending the code. We just could not agree on what would be the right thing to do as the next step. The two of us who were familiar with the goal of what we were doing had one perspective. Another suggested doing things differently, in a way that, in the moment, made little sense to us.

I realized that we were quickly going into discussion mode, convincing the other of what the right thing was - at a time when we really knew the least. The other suggestion might not have sounded like the best idea, but we played a common rule for beginning mobs: "Do the less likely first, do both". Without continuing the discussion, we just adjusted the next step to be the one that the person in the minority felt strongly enough to voice.

And it turned out to be a good thing to do in a group. As it was done, the work unfolded in a way that did not leave us missing the other option.

Keep rotating

Between hours 2 and 3, two of the six mob participants needed to step out into another meeting. I was one of the two. For the first two hours, we had rotated on a four-minute timer and pushed the rule of having a designated navigator. As I came back from the meeting, the rotation had fallen off: the mob had found relevant performance bugs and had two other people join as lurkers on the side of the table, monitoring breaking services in more detail. The lurkers did not join the mob, and the work got split so that the common thread started to hide.

Bringing back rotation brought back the group thread. Yet it was clear that the power dynamic had shifted. The quieter ones were quieter, and we could have used some work on the dominating personalities.

But one thing I loved observing in the quieter ones: they aced listening, and it showed up as timely contributions when no one else knew where to head next.

Oh Style

The group ended up on one computer with one IDE in the morning and another computer with another IDE in the afternoon. Keyboard shortcuts would fly around, and the differences between the IDEs became obvious.

On the order of doing things, there was more disagreement than we could experience and go through in one day. Strong opinions of "my way is the best way" would be best resolved doing similar tasks in different ways, and then having a retrospective discussion of the shared experiences.

And observing the group clean up code to be ready to check in was enchanting. It was enlightening to watch a group with "common rules" discover they have no common rules after all. Mobbing would really help in figuring out code styles, compared to the discussions around pull requests.




Thursday, March 9, 2017

A Simple Superpower

There was a problem, and I could tell by the discussions in the hallways. I would hear from one side that the test automation doesn't work, and that it will perhaps be fixed later - but that was uncertain. And I would hear from the other side that there's a lot to do, with suspicions of not really having time to address anything outside immediate attention.


I don't have a solution any more than anyone else. But I seem to have something of a superpower: I walk the right people into one space to have a discussion around it. And while the discussion is ongoing, I paraphrase what has been said to check if I heard right. I ask questions, and make sure quiet does not get interpreted as agreement.

There's magic in (smart) people getting together to solve things. But it seems that bringing people together is sometimes a simple superpower. Dare to make room for face-to-face communication. If two is enough to address something, great. But recognizing when three is not a crowd seems to provide a lot of benefits.

If you can use 15 minutes on complaining and uncertainty, how about walking over for a practical, solution-driven discussion? It's only out of our reach if we choose so.

Tuesday, March 7, 2017

Testing in a multi-team setting

There's a lovely theory of feature teams - groups of people working well together, picking up an end-to-end feature, working on a shared code base, and as the feature is done (as in done done done, as many times done as you can imagine), there's the feature and tests to make sure things stay as they were left.

Add multiple teams, and the lovely theory starts shaking. But add multiple teams over multiple business lines, and the shakiness is more visible.

Experiencing this as a tester makes it obvious. I work on one business line and the other business line is adding all these amazing features. If the added feature was also built and tested from my business line's perspective, it would be ideal.

The ideal breaks on a few things:
  • lack of awareness of what the other business line is expecting and needing, and in particular, that some of the stuff (unknown unknowns) tend to only be found when exploratory testing
  • lack of skill in exploratory testing to do anything beyond "requirements" or "story"
  • team level preference to create test automation code only to match whatever features they are adding
I've been looking at what I do and I'm starting to see a pattern in how I think differently than most people (read: programmers) in my team. When I look at the work, I see two rough boxes. There's the feedback that I provide for the lovely programmers in my team (testing our changes / components) and there's the feedback I provide for the delightful programmers in other teams (testing their changes in product / system context).

It would be so much easier if everyone in the team shared a scope, but this division of "I test our stuff and other teams' stuff" becomes very clearly distinguished when seeking someone to fix what I found. And I find myself running around the hallways meeting people from the other teams, feeling lucky if my feedback was timely so that a fix will emerge immediately. More often than not, it isn't timely, and I get to enjoy managing a very traditional bug backlog.

Feature teams that can and do think in the scope of systems (over product lines) would help. But in a complex world, getting all the information together may be somewhat of a challenge.

Minimum requirement though: the test automation should be timely and thus available for whatever the team is that is now making (potentially breaking) changes without a human messenger in the chain. 

Thursday, March 2, 2017

The Awesome Flatness of Teams

For a long time, I've known that benchmarking our practices with other companies is a great way of mutual learning. But a lot of times these benchmarks teach me things that I never anticipated. Today was one of these and I wanted to share a little story.

Today, I found myself sitting on Skype facing three people, just as agreed. One of the three introduced themselves as "just a quality engineer", whereas the others had flashier titles. I also introduced myself as "just a quality engineer". Those words have fascinated me since.

The discussion led me to realize I really haven't given much credit to how different our team structure is from most places'. Our teams consist of people hired as "software engineers" and "quality engineers", and there's somewhat of a history and rule of thumb on how many of each type you would look for in a team. We share the same managers.

When you grow in a technical role, you move to senior, lead and principal in the same family of roles. And usually the growing means changes in the scope of what you contribute on, still as "just a member of a team".

As a lead quality engineer, I'm not a manager. I'm a member of a team, where I test with my team and help us build forward our abilities to test. With seniority, I work a lot cross-team figuring out how my team could help others improve and improve itself. I volunteer to take tasks that drive our future to a better state. I'm aware of what my team's immediate short term goal is, but also look into finding my contribution to the organization's long term goals.

Our teams have no scrum masters. The product owners work on priorities, clarifications and are a lovely collaborator for our teams. I'm not allocated a technical (quality engineering) leadership, I just step up to it. Just like the fellows next to me.

So I'm "just a tester", as much as anyone ever is just anything. But much of my power comes from the fact that there's no one who is anything more. Everyone steps up. And it's kind of amazing. 

Wednesday, March 1, 2017

Seeing symmetry and consistency

The morning at the office starts off with news of relevant discussions that took place while I was gone. So I find myself standing next to a whiteboard with a messy picture of scribbled boxes, arrows and acronyms. And naturally, none of it would make sense without a guide.

But with a guide, I quickly pick up what this is about. A new box is introduced. The number of arrows is minimized. The new box has new technology, and I ask some questions to compare and contrast it with the other technologies we're using, to figure out if there's a risk I'd raise right now.

I also see symmetry. There are boxes for similar yet different purposes. Pointing out the symmetry as a thing that makes sense from a testing perspective (I know what to test on the new thing, as it is symmetrical to the old thing) gets approving nods.

I end up not raising risks, but complimenting the choices for symmetry and the choice to leave unchanged the boxes I was expecting might be changed simultaneously just because we can.

There's hope for incremental development.

Tuesday, February 28, 2017

The Lying Developers

The title is a bit clickbait-y, right? But I can't help directly addressing something from the UKStar Conference and a talk I was not at, summarized in a tweet:
As a tester, the services I provide are not a panacea for all things wrong with the world. I provide information, usually with a primary, empirical emphasis on the product we are building. Being an all-around lie detector does not strike me as the job I signed up for. Only some of the lies are my specialty, and I would claim that me being "technical" isn't about the core type of lie (I prefer illusion) that I'm out to detect.

If a developer tells me something cannot be fixed (and that is a lie), there are other developers to pick up that lie. And if they all lie on that together, I need a third party developer to find a way to turn that misconception into a learning of how it is possible to do after all. I don't have to be able to do it myself, but I need to understand when *impossible* is *unacceptable*. And that isn't technical, that is understanding the business domain.

If a developer tells me something is done when it isn't, the best lie detector isn't going and reading the code. Surely, the code might give me hints of completely missing implementation or a bunch of todo tags, but trying out the functionality often reveals that and *more*. Why would we otherwise keep finding bugs when we patiently go through the flows that have been peer reviewed in pull requests?

Back in the days, I had a developer who intentionally left severe issues in the code he handed to testing to "see if we notice". Well, we did.

And in general, I'm happy to realize that is as close to systematic lying as I feel I have needed to come.

Conflicting belief systems are not a lie. And testers are not lie detectors; we have enough work on us without even hinting at the idea that we would intentionally be fooling each other.

There are better reasons to be a little technical than the lying developer fallacy.



Monday, February 27, 2017

A chat about banks

After a talk on mob testing and programming, someone approached me with a question.

"I'm working with big banks. They would never allow a group to work this way. Is there anything you have to say to this?"

Let's first clarify. It's really not my business to say whether you should or should not mob. What I do is share that, against all my personal beliefs, it has been a great experience for me. I would not have had the great experience without doing a thing I did not believe in. Go read back in my blog about my doubts, how I felt it was the programmers' conspiracy to make testers vanish, and how I later learned that where I was different was more valuable, in a timely manner, in a mob.

But the problem with big banks as such is that there are people who are probably not willing to give this a chance. Most likely you're even a contractor, and proposing this adds another layer: how about you pay us for five people doing the "work of one person". Except it isn't work of one person. It's the five people's work done in small pieces so that whatever comes out in the end does not need to be rewritten immediately and then again in a few months.

Here's a story I shared. I once got a chance to see a bank in distress: they had production problems and were suffering big time. The software was already in production. And as there was a crisis, they did what any smart organization does: they brought together the right people, and instead of letting them work separately, they put them all on the same problem, in the same room. The main difference from mobbing was that they did not really try to work on one computer. But a lot of the time, solving the most pressing problem, that is exactly what they ended up doing.

For the crisis time, it was a non-issue financially to bring together 15 people to solve the crisis, using long hours. But as soon as the crisis was solved, they again dismantled their most effective way of developing. The production problems were a big hit for reputation as well as financially. I bet the teams could have spent some time on working tightly together before the problem surfaced. But at that time, it did not feel necessary because wishful thinking is strong.

We keep believing we can do the same or good work individually one by one. But learning and building on each other tends to be important in software development.

Sure, I love showing the lists of all the bugs developers missed. The project managers don't love me for showing that too late. If likes of me could use a mechanism like mobbing to change this dynamic, wouldn't that be awesome?

Friday, February 24, 2017

Theories of Error

Some days it bothers me that testers seem to focus more on actions while testing than on the underlying motivations of why they test the way they do. Since I was thinking about this all the way to the office, I need to write about a few examples.

At a conference a few weeks ago, I was following a session where some testing happened on stage, and the presenter had a great connection, speaking back and forth with the audience about ideas. The software tested was a basic tasks tool, running on a local machine, saving stuff into whatever internal format the thing had. And while discussing ideas with the audience, someone suggested testing SQL injection types of inputs.

The idea of injection is to enter code through input fields to see if the inputs are cleaned up or if whatever you give goes through as such. SQL in particular would be relevant if there was a database in the application, and is a popular quick attack among testers.

However, this application wasn't built on a database. Testing this way wouldn't make sense without a bit more of a story around it. As the audience member and the presenter remained puzzled, I volunteered the idea of connecting the injection inputs with an export functionality, if there was one, and assessing the possible error from that perspective. A theory of error was needed for the idea to make sense.
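As a sketch of what such a theory of error could look like in practice, here is a hypothetical Python example: injection-style probes fed through a made-up export function, where the theory of error is that exported fields might execute as formulas or markup in the receiving tool. The function name and its sanitization rules are my assumptions for illustration, not the tool from the talk.

```python
# Typical injection-style probes a tester might try; each one only makes
# sense against a theory of where the input might later be interpreted.
INJECTION_PROBES = [
    "'; DROP TABLE tasks; --",    # SQL injection - needs a database to matter
    "<script>alert(1)</script>",  # script injection - matters for HTML export
    '=cmd|"/c calc"!A1',          # formula injection - matters for CSV export
]

def export_to_csv(task_names):
    """Hypothetical export: quotes each field, neutralizes formula triggers."""
    def sanitize(field):
        if field.startswith(("=", "+", "-", "@")):
            field = "'" + field  # prefix stops spreadsheets executing formulas
        return '"' + field.replace('"', '""') + '"'
    return "\n".join(sanitize(name) for name in task_names)

# Exploring: feed each probe through the export and inspect the output.
for probe in INJECTION_PROBES:
    print(export_to_csv([probe]))
```

The point is not the sanitization itself but the connection: each probe is worth trying only once you can say where the stored value ends up being interpreted.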

Another example I keep coming back to is automation running a basic test again and again. There has been the habit of running the same basic test on schedule (frequently) because identifying all the triggers of change in a complex environment is a complex task. But again, there should be a theory of error.

I've been volunteered a number of rationales for running automation this way:
  • The basic thing is basic functionality and we just want to see it always stays in action
  • If the other things around it wouldn't cause as many false alarms in this test, it would actually be cheap to run, and I wouldn't have to care that it does not really provide information most of the time
  • When run enough times, timing-related issues get revealed and with repetition, we get out the 1/10000 crash dump that enables us to fix crashes
I sort of appreciate the last one, as it has an actual theory of error. The first two sound most of the time like sloppy explanations.
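The last rationale can even be sketched as code. This is a hypothetical Python illustration (the flaky operation is simulated with a random draw, and all the names are made up): the theory of error is that a rare, timing-dependent failure only surfaces over many runs, so the harness collects every failure for analysis instead of stopping at the first green run.

```python
import random

def flaky_operation(rng):
    # Stand-in for an operation with a rare, timing-dependent crash;
    # here the roughly 1-in-1000 failure is simulated with a random draw.
    if rng.random() < 0.001:
        raise RuntimeError("crash")
    return "ok"

def run_repeatedly(operation, runs, seed=0):
    # Run many times and collect failures - repetition is the point,
    # because a single green run tells us nothing about rare errors.
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    failures = []
    for i in range(runs):
        try:
            operation(rng)
        except RuntimeError as exc:
            failures.append((i, str(exc)))  # keep the "crash dump" to analyze
    return failures

failures = run_repeatedly(flaky_operation, runs=50_000)
print(f"{len(failures)} failures in 50000 runs")
```

With a stated theory of error like this, the cost of repetition buys something concrete: the failure records that make the rare crash debuggable.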

So I keep thinking: how often can we articulate why we are testing the way we are? Do we have an underlying theory of error we can share and if we articulated it better, would that change the way we test? 

Tuesday, February 21, 2017

It's all in perspective - virtual images for test automation use

I seem to fluctuate between two perspectives on the test automation I get to witness. On some days (most), I find myself really frustrated with how much effort can go into such a small amount of testing. On other days, I find the platforms built very impressive, even if the focus of what we test could still improve. And reflecting on how others are doing, I lower my standard and expectation for today, allowing myself to feel very happy and proud of what people have accomplished.

The piece I'm in awe of today is the operating system provisioning system at the heart of the way test automation is done here. And I just learned we have open sourced (yet apparently publicized very little) the tooling for this: https://github.com/F-Secure/dvmps

Just a high-level view: imagine spawning 10 000 virtual machines for test automation use on a daily basis, with each running some set of tests. It takes just seconds to have a new machine up and running, and I often find myself tempted to use one of the test automation machines, as the wait times for the images reserved for manual testing are calculated in minutes.

With the thought of perspectives in mind, I'm off to do a little more research on how others do this. If you're working at scales like this, I would love to benchmark experiences.

Friday, February 17, 2017

Testing by Intent

In programming, there's a concept called Programming by Intent. Paraphrasing how I perceive the concept: it is helpful not to hold big things in your head but to outline the intent that then drives the implementation.

Intent in programming becomes particularly relevant when you try to pair or mob. If one of the group holds a vision of a set of variables and their relations just in their head, it makes it next to impossible for another member of the group to catch the ball and continue where the previous person left off.

With experiences in TDD and mob programming, it has become very evident that making intent visible is useful. Working in a mob, when you go to the whiteboard with an example, turn that into English (and refactor the English), then turn it into a test, and then create the code that makes the test pass, the work just flows. Or actually, getting stuck in the flow happens more around the discussions at the whiteboard.
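To make that flow concrete, here is a minimal Python sketch with a made-up example (the cart and its prices are hypothetical, just to show the shape): the refactored English sentences become test names, and the implementation is written only to make those tests pass.

```python
# Whiteboard example -> English -> test -> code, in miniature.
# The English sentences "an empty cart totals to zero" and
# "a cart sums its item prices" became the test names below.

def cart_total(prices):
    # Written only after the tests below existed.
    return sum(prices)

def test_an_empty_cart_totals_to_zero():
    assert cart_total([]) == 0

def test_a_cart_sums_its_item_prices():
    assert cart_total([3, 4]) == 7

test_an_empty_cart_totals_to_zero()
test_a_cart_sums_its_item_prices()
```

Because the intent lives in the test names rather than in one person's head, anyone in the mob can pick up the keyboard and continue.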

In exploratory testing, I find that those of us who have practiced it more intensely tend to inherently have a little better structure for our intent. But as I've been mob testing, I find that we still suck at sharing that intent. We don't have exactly the same mechanisms that TDD introduces to programming work, and with exploratory testing, we want to opt for the sidetracks that provide serendipity. But we want that in a way that helps us track where we were, and share that idea of where we are with the team.

The theme of testing by intent was my special focus while looking at a group mobbing on my exploratory testing course this week. I had an amazing group: mostly people with 20+ years in testing. One test automator: a developer with a solid testing understanding. And one newbie to testing. All super collaborative, nice and helpful.

I experimented with ways to improve intent and found that:
  • for exploring, shorter rotation forces the group to formulate clearer intent
  • explaining the concept of intent helped the group define their intent better, charters as we used them were too loose to keep the group on track of their intent
  • explicitly giving the group (by example) mechanisms of offloading sidetracks to go back to later helped the focus
  • when seeking deep testing of a small area, strict facilitation was needed to stop people from leaving work undone and seeking other areas - the inclination is to be shallow
There's clearly more to do in teaching people how to do this. The stories of what we are testing and why we are testing it this way are still very hard for so many people to voice.

Then again, it took me long, deliberate practice to build up my self-management skills. And yet, there's more work to do.