Thursday, September 21, 2017

What makes a test automation expert?

I was part of a working group that created an article called 125 Awesome Testers You Should Keep Your Eye on Always. It may not be obvious, but that list is a response to another article called 51 automated testing Experts You Should Keep Your Eye on Always. That list had only four women (at least it had four women!) and let me tell you a big public secret:
It is not because there aren't many awesome women in automation. It is because people don't look around and pay attention.
I could have many different criteria on what makes a test automation expert:
  • Speaks about test automation in public (conferences, articles) in a way that others find valuable
  • Does epic stuff on making automation work out and do real testing
  • Is identified as a creator of a test automation framework or library
  • Speaks only of automation and never in a manner that addresses its limits
The 125 awesome testers list does not identify automation separately, because I find that most people contribute to test automation in a significant way. Not all of the people on either of those lists have created an open source tool of their own. Not all of the people on either of those lists write test automation code as their main thing.

We can be awesome at automation in so many ways. Writing code alone in a corner is not the only way. Many of us work in teams that collaborate: pair, or even mob. Coding is not the only way to do automation.
  • Delivering insights that are directly transferable to useful test automation is a way of doing automation. 
  • Working on the automation architecture, defining what we share is a way of doing automation.
  • Helping see what we've done through lenses of value in testing is a way of doing automation. 
  • Reading code without writing a line and commenting on what gets tested is a way of doing automation. 
  • Pairing and mobbing are ways of doing automation.
We don't say coding is all there is to application development, so why would coding be all there is to test automation development?
There's been a particular experience that has shaped my thinking around this a lot, which is working with mob programming. After programming in 14 different languages, I still identified as a non-programmer because my interests were wider. I actively forgot the experience I had, and downplayed it for decades. What changed me was seeing people who are programmers in action. I did not change because I started coding more. I changed because I started seeing that everyone codes so little.

The image below is from a presentation by Anssi Lehtelä, a fellow tester in Finland who now also has two years of mob programming with his team under his belt. A core insight I find we share is that in coding, there is surprisingly little actual coding. It's thinking and discussions. And that's what we've always been great at too! And don't forget googling - they google like crazy!

Lists tell you who the list maker follows. Check if you even have a possibility of recognizing the awesome women in automation using your twitter feed. It can be brutal. Mine is 53% women. In the numbers I can follow, there's easily a brilliant, inspirational woman to match every single man. In any topic, including automation. Start hearing more voices.

Monday, September 18, 2017

Announcing an Awesome Conference - European Testing Conference 2018

TL;DR: European Testing Conference 2018 in Amsterdam February 19-20. Be there! 

Two months of Skype calls with 120 people submitting to European Testing Conference 2018 in Amsterdam have now transformed into a program. We're delighted to announce the people you get to hear from, and the topics you get to learn, in the 2018 conference edition! Each one of these has been hand-picked for practical applicability and diversity of topics and experiences through a process of pair interviews. Thank you to the awesome selection team of 2018: Maaret Pyhäjärvi, Franziska Sauerwein, Julia Duran and Llewellyn Falco.

We have four keynotes for you, balancing testing as testers and programmers know it and cultivating cross-learning:
  • Gojko Adzic will share on Painless Visual Testing
  • Lanette Creamer teaches us on how to Test Like a Cat
  • Jessica Kerr gives the programmer perspective with Coding is the easy part - Software Development is Mostly Testing
  • Zeger van Hese presents Power of Doubt - Becoming a Software Sceptic
With practical lessons in mind, we reserve 90-minute sessions for the following hands-on workshops. You get to choose two to participate in, as we repeat the sessions twice during the conference:
  • Lisa Crispin and Abby Bangser teach on Pipelines as Products Path to Production
  • Seb Rose and Gaspar Nagy teach on Writing Better BDD Scenarios
  • Amber Race teaches on Exploratory Testing of REST APIs
  • Vernon Richards teaches on Scripted and Non-Scripted Testing
  • Alina Ionescu and Camil Braden teach on Use of Docker Containers
While workshops get your hands into learning, demo talks give you a view into watching someone experienced do something you would want to mimic. We wanted to do three of these side by side, but added an organizer bonus talk on something we felt strongly about. Our selection of demo talks is:
  • Alexandra Schladebeck lets you see Exploratory Testing in Action
  • Dan Gilkerson shows you how to use Glance in making your GUI test code simpler and cleaner
  • Matthew Butt shows how to Unit/Integration Test Things that Seem Hard to Test
  • Llewellyn Falco builds a bridge for more complicated test oracles sharing on Property-Based Testing
Each of our normal talks introduces an actionable idea you can take back to your work. Our selection of these is:
  • Lynoure Braakman shares on Test Driven Development with Art of Minimal Test
  • Lisi Hocke and Toyer Mamoojee share on Finding a Learning Partner in Borderless Test Community
  • Desmond Delissen shares on a growth story of Two Rounds of Test Automation Frameworks
  • Linda Roy shares on API Testing Heuristics to teach Developers Better Testing
  • Pooja Shah introduces Building Alice, a Chat Bot and a Test Team mate
  • Amit Wertheimer teaches Structure of Test Automation Beyond just Page-Objects
  • Emily Bache shares on Testing on a Microservices Architecture
  • Ron Werner gets you into Mobile Crowdsourcing Experience
  • Mirjana Kolarov shares on Monitoring in Production 
  • Maaret Pyhäjärvi teaches How to Test A Text Field
In addition to all this, there are three collaborative sessions where everyone is a speaker. First there's a Speed Meet, where you get to pick up topics of interest from others in fast rotation and make connections even before the first lunch. Later, there is a Lean Coffee, which gives you a chance for deep discussions on testing and development topics of interest to the group you're discussing with. Finally, there's an Open Space where literally everyone can be a speaker, and bring out the topics and sessions we did not include in the program, or where you want to deepen your understanding.

European Testing Conference is different. Don't miss out on the experience. Get your tickets now from http:/ 

Saturday, September 16, 2017

How Would You Test a Text Field?

I've been doing tester interviews recently. I don't feel fully in control there, as there's an established way of asking things that is more chatty than actionable, and my bias for action is increasing. I'm not worried that we hired the wrong people, quite the opposite. But I am worried we did not hire all the right people, and that some people would shine better given a chance of doing instead of talking.

One of the questions we've been using, where it is easy to make the step from theory to practice, is: How would you test a text field? I asked it in all of them, around a whiteboard when not on my private computer with all sorts of practice exercises. And I realized that the exercise tells a lot more when done on a practice exercise.

In the basic format, the question reveals how people think of testing and how they generate ideas. The basic format, as I'm categorizing things here, is heavily based on years of thinking and observation by two of my amazing colleagues at F-Secure, Tuula Posti and Petri Kuikka, and was originally inspired by discussions on some online forums some decades ago.

Shallow examples without labels - wannabe testers

There's a group of people who want to become testers but as yet have little idea of what they're getting into, and they tend to go for shallow examples without labels.

They would typically give a few examples of values, without any explanation of why that value is relevant in their mind: mentioning things like text, numbers and special characters. They would often try to show their knowledge by saying that individual text fields should be tested in unit testing, and suggest easy automation without explaining anything else about how that automation could be done. They might go on to talk about hardware requirements, just to show they are aware of the environment, but go too far in their idea of what is connected. They might jump into talking about writing all this into test cases so that they can plan and execute separately, and generate metrics on how many things they tried. They might suggest this is a really big task and propose setting up a project with several people around it. And they would have a strong predefined idea of their own of what the text field looks like on screen, like just showing text.

Seeing the world around a text box - functional testers

This group of people have been testers and picked up some of the ideas and lingo, but often also an over-reliance on one way of doing things. They usually see there's more than entering text into a text box that could go wrong (pressing the button, trying enter to send the text) and talk of the user interface more than just the examples. They can quickly list categories of examples, but also stop that list quite quickly, as if it were an irrelevant question. They may mention a more varied set of ideas, and list alphabetic, numeric, special characters, double-byte characters, filling up the field with long text, making the field empty, copy-pasting to the field, trying to figure out the length of the field, erasing, fitting text into the visible box vs. scrolling, and suggest code snippets of HTML or SQL, the go-to answer for security. They've learned there are many things you can input, and that the field is not just about basic input; it also has dimensions.

This group of people often wants to show the depth of their existing experience by moving the question away from what it is (the text field) to processes, and emphasize experiences around how relevant it is to report to developers through bug reports, how developers may not fix things correctly, and how a lot of time goes into retesting and regression testing.

Tricks in the bag come with labels - more experienced functional testers

This group of testers have been looking around enough to realize that there are labels for all of the examples others just list. They start talking of equivalence partitioning and boundary values, testing positive and negative scenarios, and can list a lot of different values and even say why they consider them different. When the list starts growing, they start pointing out that priority matters and not everything can be tested, and may even approach the idea of asking why anyone would care about this text field: where is it? But the question isn't the first thing; the mechanics of possible values are. Their prioritization focus leads them to address the use of time in testing it, and they question if it is valuable enough to be tested more. Their approach is more diversified, and they are often aware that some of this stuff could be tested on the unit level while other parts require things integrated. They may even ask if the code is available to see. And when they want to enter HTML and SQL, they frame those not just as inputs but as ideas around security testing. The answer can end up long, and show off quite a lot of knowledge. And they often mention they would talk to people to get more, and that different stakeholders may have different ideas.
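The labels this group uses lend themselves to a compact sketch. The Python snippet below is only an illustration of equivalence partitioning and boundary values applied to a text field; the 50-character limit and the `accepts` function are invented stand-ins, not any real system under test:

```python
# Hypothetical sketch: equivalence classes and boundary values for a
# text field with an assumed maximum length of 50 characters.
MAX_LEN = 50

equivalence_classes = {
    "alphabetic": "hello",
    "numeric": "12345",
    "special": "!@#$%",
    "double_byte": "テスト",
    "empty": "",
    "html": "<b>bold</b>",            # framed as a security idea, not just input
    "sql": "'; DROP TABLE users;--",  # likewise
}

boundary_values = {
    "just_under_limit": "a" * (MAX_LEN - 1),
    "at_limit": "a" * MAX_LEN,
    "just_over_limit": "a" * (MAX_LEN + 1),
}

def accepts(value: str) -> bool:
    """Stand-in for the system under test: accepts anything up to MAX_LEN."""
    return len(value) <= MAX_LEN

for name, value in {**equivalence_classes, **boundary_values}.items():
    print(f"{name}: accepted={accepts(value)}")
```

The point of the labels is exactly this compression: a handful of named classes and three boundary cases stand in for an infinite space of possible inputs, and the prioritization discussion becomes which classes matter here.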

Question askers - experienced and brave

There's a group who seems to know more even though they show less. This group realizes that testing is a lot about asking questions, and mechanistic approach of listing values is not going to be what it takes to succeed. They answer back with questions, and want to understand typically the user domain but at best also the technical solution. They question everything, starting with their understanding of the problem at hand. What are they assuming, and can that be assumed? When not given a context of where the text field is, they may show a few clearly different ones to be able to highlight their choices. Or if the information isn't given, they try to figure out ways of getting to that information. 

The small group I had together started just with brainstorming the answer. But this level wasn't where we left off.

After the listing of ideas (and assumptions, there were a lot of those), I opened a web page on my computer with a text field and an OK button, and had the group mob to explore, asking them to apply their ideas to this. Many of the things they had mentioned in the listing exercise just before were immediately dropped - the piece of software, and the possibility of using it, took people with it.

The three exercises

The first exercise was a trick exercise. I had just had them spend 10 minutes thinking about how they would test, and mostly they had not thought about the actual functionality associated with the text field. Facing one, they started entering values and looking at output. Over time, they came up with theories but did not follow up on testing those, and got quite confused. The application's text field had no functionality; only the button had. After a while, they realized they could go into the dev tools and the code. And they were still confused about what the application did. After a few rounds of three minutes each on the keyboard, I had us move on to the next example.

The second exercise was a text box in the context of a fairly simple editor application, but one where focusing on the text box alone, without the functions immediately connected to it (the unit test perspective), would miss a lot of information. The group was strong on ideas, but weaker on execution. When giving a value, what a tester has to do is stop (very briefly) and look at what they learned. The learning wasn't articulated. They missed things that went wrong. Things where, to me, an experienced exploratory tester, the application was almost shouting to tell how it was broken. But they also found things I did not remember, like the fact that copy-pasting did not work. With hints and guidance through questions, I got them to realize where the text box was connected (software tends to save stuff somewhere), and eventually we were able to understand what we could do with the application and what with the file it connects to. We generated ideas around automation, not through the GUI but through the file, and discussed what kinds of things that would enable us to test. When asked to draw a conceptual picture of the relevant pieces, they did well. There were more connections to be found, but finding them takes either a lot of practice in exploring or more time to learn the layers.
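One way to picture the idea of automating through the file rather than the GUI: treat the saved file as the interface and assert on round-trips. This is only a sketch under invented assumptions; the plain-text save format and the `save_document`/`load_document` stand-ins are hypothetical, not the actual editor:

```python
# Hypothetical sketch: testing an editor's text box through the file it
# saves to, instead of through the GUI. The plain-text save format and
# the save/load functions are invented for illustration.
from pathlib import Path
import tempfile

def save_document(text: str, path: Path) -> None:
    """Stand-in for the editor's save: writes the text box content as UTF-8."""
    path.write_text(text, encoding="utf-8")

def load_document(path: Path) -> str:
    """Stand-in for the editor's open: reads the content back."""
    return path.read_text(encoding="utf-8")

with tempfile.TemporaryDirectory() as tmp:
    doc = Path(tmp) / "note.txt"
    # Values drawn from the earlier idea lists: plain, empty, non-ASCII, long.
    for text in ["plain text", "", "ä ö テスト", "a" * 10_000]:
        save_document(text, doc)
        assert load_document(doc) == text  # round-trip should preserve content
print("all round-trips preserved the text")
```

Going through the file sidesteps the GUI entirely, which is both the appeal (fast, scriptable) and the limitation (it would never have caught the broken copy-paste).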

Again with the second exercise, I was left puzzled by what I observed. They had a lot of ideas as a group on what to try, but less discipline in trying those out or in following up on what they had tried. While they could talk of equivalence partitioning or boundaries, their actual approaches to thinking about which values are equivalent, and to learning more as they used the application, left something to be desired. The sources of actual data were interesting to see: "I want a long text" ended up as something they could measure, but they were unaware of any online tool that would help with that. They knew some existed but did not go get them. It could have been a priority call, but they also did not talk about making a priority call. When the application revealed new functionality, I was making a mental note of new features of the text box I should test. And when that functionality (ellipsis shortening) changed into another (scroll bars), I had a bug in mind. Either they paid no attention, or I pointed it out too soon. Observation and reflection on the results were not as strong as idea generation.
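The "I want a long text" need, by the way, does not require an online tool; a measured string can be generated in a couple of lines. A small sketch (the lengths and the marker format are arbitrary choices for illustration):

```python
# Generating long, measurable test strings locally instead of fetching
# them from an online tool.
def long_text(length: int, unit: str = "a") -> str:
    """Repeat `unit` until the result is exactly `length` characters."""
    return (unit * (length // len(unit) + 1))[:length]

sample = long_text(4096)
assert len(sample) == 4096

# A variant with position markers every 10 characters, so a truncation
# point is easy to read straight off the UI.
marked = "".join(f"{i:>10}" for i in range(1, 11))
assert len(marked) == 100
print(len(sample), len(marked))
```

Knowing exactly how long the input is turns "the field seems to cut text somewhere" into "the field cuts text at character 4,000".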

The third exercise was a text field in a learning API, and watching that testing unfold was one of the most interesting parts. One in the group quickly created categories of three outputs that could be investigated separately. This one was on my list because right/wrong is multifaceted here, depending on where the functionality would be used and how reliable it would need to be. Interestingly, in the short timeframe we stuck with data we could easily generate. This third one gave me a lot to think about, as I later had one of my team's developers test it, seeing them go even more strongly into testing one output at a time, and insisting on never testing an algorithm without knowing what it includes. I regularly test their algorithms by assessing if the algorithm is good enough for the purpose of use, and found that the discussion was around "you shouldn't do that, that is not what testers are for".

The session gave me a lot of food for thought. Enough so that I will turn this into a shorter session teaching some of the techniques of how to actually be valuable. And since my conference tracks planned are already full, I might just take an extra room for myself to try this out as the fourth track. 

Friday, September 15, 2017

Fixing Wishful Thinking

There's a major feature I've been testing for six months now. Things like this are my favorite type of testing activity. We work together over a longer period of time, delivering more layers as time progresses. We learn, and add insights, not only through external feedback but also because we have time to let our own research sink in. We can approach things with realism - the first versions will be iterated on, and whatever we are creating is indefinitely subject to change.

I remember when we started, and drew the first architectural diagrams on the wall. I remember when we discussed the feature in threat modeling and some things I felt I could not get through before got addressed with arguments around security. I remember how I collected dozens of claims from the threat modeling session as well as all discussions around the feature, and diligently used those to drive my learning while exploratory testing the product.

I'm pretty happy with the way that feature got tested. How every round of exploration stretched our assumptions a little more, giving people feedback that they could barely accept at that point.

Thinking of the pattern I've been through, I'm naming it Wishful Thinking. Time and time again with this feature and this developer, I've needed to address very specific types of bugs - namely, ones around design that doesn't go quite far enough.

The most recent example came from learning that certain standard identifiers have different formats. I suggested this insight should lead to us testing the different formats. I did not feel much support for it, though no active resistance either. I heard how the API we're connecting to *must* already deal with it - one of the claims I write down and test against. So I tested two routes, through a provided UI and through the API we connect with, only to learn that my hunch was confirmed - the API was missing functionality that someone else's UI on top of it added, and that we, connecting with the API, missed.

A day later, we no longer missed it. And this has been a pattern throughout the six months.

My lesson: as a tester, sometimes you work to fix Wishful Thinking. The services you are providing are a response to the developers you're helping out. And they might not understand at all what they need, but they appreciate what they get - only in small digestible chunks, timed so the message turns into something actionable right now.

Learning through embarrassment

Through using one video from users to make a point, we created a habit of users sharing videos of problems. That is kind of awesome. But there's a certain amount of pain in watching people in pain, even if it wasn't always the software failing in software ways, but the software failing in people ways.

There was one video that I got to watch that was particularly embarrassing. It was embarrassing because the problem shown in that video did not need a video. It was simply a basic thing breaking in an easy way. One where the fix is probably as fast as the video - with only a bit of exaggeration. But what made it embarrassing was the fact that while I had tested that stuff earlier, I had not recently, and I felt personal ownership.

The soundtrack of that video was particularly painful. The discussion was around 'how do you test this stuff, you just can't test this stuff' - words I've heard so many times in my career. A lot of the time, the problems attributed to lack of testing are known issues incorrectly prioritized for fixing. But this one was very clearly just that - lack of testing.

It again left me thinking of over-reliance on test automation (I've written about that before) and how in my previous place of work the lack of automation made us more careful.

Beating oneself up isn't useful, but there's one heuristic I can draw from this learning: when all eyes are on you, the small and obvious become more relevant. The time dimension matters.

I know how I will test next week. And that will include pairing on exploratory testing. 

Saturday, September 9, 2017

Burden of Evidence

This week, I shared an epic win around an epic fail with the women in testing slack community, as we have a channel dedicated to brag-and-appreciate - sharing stuff in more words than on twitter, knowing your positive reinforcement of awesome things won't be taken against you. My epic win was around a case of being heard on something important, where not being heard and not fighting the case was the epic fail.

On top of thinking a lot about the ways I make my case to get feedback reacted on, like bringing in a real customer voice, the person I adore the most in the world of testing twitter tweeted on just this topic.
I got excited, thoughts rushing through my head on the ways of presenting evidence, and making both the objectively and emotionally appealing case. And it is so true, it is a big part of what I do - what we do in testing.

As great people come, another great one in twitterverse tweeted in response:
This made me stop and think a little more. What if I did not have to fight, as a tester, to get my concerns addressed? What if the first assumption wasn't that the problem isn't relevant (it is clearly relevant to me, why would I talk about it otherwise?) and that the burden of evidence is on me? What if we listened, believed, trusted? What if we did not need to spend as much time on the evidence as testers - what if a hunch of evidence was enough, and we could collaborate on getting just enough to do the right things in response?

Wouldn't the world be more wonderful this way? And what is really stopping us from changing the assumed role of a tester from the one carrying the burden of evidence to someone helping identify what research we would need to conduct on the practical implications of quality?

Creating evidence takes time. We need some level of evidence in our projects to make the right decisions. But I, as a tester, often need more evidence than the project really needs, just to be heard and believed. And a big part of my core skillset is navigating the world in a way where, when there's a will, there's a way: I get heard. It just takes work. Lots of it.

Talking to 120 people just for a conference

I'm doing the last two days of intensive discussions with people all around the world. Intensive in the sense that I hate small talk. These people are mostly people I have never met. It could be my worst nightmare. But it isn't. It's the best form of socializing I can personally think of. 

We had 120 people submit to European Testing Conference 2018. Instead of spending time reading their abstracts, we spend time talking to the people, to hear not just their pitch but their stories and lessons. Each one gets 15 minutes. In these 15 minutes, they introduce all their topics, get feedback on what we heard (and read during the session), and as an end result, we walk out with one more connection.

Entering the call, some people are confused about why we care so little about introductions and professional history. We assume awesome. We assume relevance. And we are right to assume that, since each and every one of us has unique perspectives worth sharing. Connection is a basic human need, and volunteering to speak on something that matters to you makes you worthy.

The discussion is then centered around whatever topic the one with a proposal brought into the discussion. It's not about small talk, but about sharing specific experiences and finding the best voice and story within those experiences to consider for the conference.

When I tell people about the way we do things for the call for collaboration, the first question tends to be around the use of time. 120 people, 15 minutes each - that is 30 hours! And not just that: we pair on the discussions, making it an even bigger investment. But it is so worth it.

The investment gives the conference a real idea of what each person will bring and teach, and enables us to help in avoiding overlap and to build a fuller picture of what testing is about. Similarly, it gives us the best speakers, because we choose based on speaking, not writing. It brings forth unique aspects of diverse perspectives that enable us to balance our selection. The conference gets an awesome program, and I do not know any other mechanism to build a program like this.

The investment gives the people collaborating with us a piece of feedback they usually never get. They get to talk to organizers, hear how their talks and topics balance with the current topics people are sharing. Many people come in with one topic, and walk out with several potential talk topics. And even if they get nothing else, they get to meet new people who love testing from some angle just as much as they do. 

The investment gives most to me personally. With only 30 hours, I get to meet some of the most awesome people in the world. I get private teaching, specifically answering questions I raise on the topics we talk on. I get to see what people who want to speak share as experiences, and I get to recognize what is unique. I become yet better at being a source of all things testing, who can point out where to go for the deeper information. It improves my ability to google, and to connect things I hear into a view of world of all things testing. 

I've met awesome developers, who reinforce my restored belief in the positive future of our industry. I've met testers and test managers, who work the trenches getting awesome value delivered. I've met designers and UX specialists, who want to bridge the gaps we have between professions and share great stuff. Some stories teach stuff I know with a personal slant. Some bring in perspectives I wasn't aware of.

It's been a privilege to talk to all these people. I see a connection from what we do for collaboration calls to what we do with our speed meet session. We give every one of our participants a chance to glimpse into the stuff the other knows, without small talk. A connection made can turn into a lifetime of mutual learning.

Monday, September 4, 2017

Funny how we talk of testing now

Most of the time, I just do what I feel I want to do. And I want to do loads of exploratory testing, from various APIs to whatever is behind them, all the way to a true end to end.

Today, however, to finally get to a point of end-to-end exploration, I needed an idea of when the last piece in my chain would be available. So I asked:
I want to test X end-to-end, any idea when the piece-of-X is available? 
The response really surprised me. It was:
A is working on such an end-to-end test.
I was almost certain that we were confusing test (the artifact) and test (the activity) here, so I went on to clarify:
This test is exploratory (not automated) and with focus on adding information on end user experience. Is that what A does as well?
I got a quick no. And a link to one single test automation case that the team had agreed to add, for quite a simple positive end-to-end case.

As I did not test yet, I have no idea what more I will find. Most likely some. Almost every time some.

I'm happy that the end-to-end automation case will end up existing and monitoring what there is. But surely that is not what testing is all about?

It's fascinating how quickly this degeneration of talking around test happens. How quickly the activity turns into specific artifacts.

It takes a belief in the unknown unknowns to get people to explore, when they think they can plan the artifact. Communication gets "fixed" by always using more words, rather than fewer.

Saturday, September 2, 2017

Would You Just Listen for a Change?

Six months ago, a new project manager walked into our team planning meeting. It was loud, arguments were flying and there was a lot of laughter. The conclusion about our team dynamics was that we are like an Irish family. I think that was a compliment.

I absolutely adore my team. I love how the developers embraced unit testing long ago, so that they do it automatically and seek new ideas on it. I love how the only thing stopping them from doing even better is the invisible barriers of experiences of being punished or not supported. And I love how, together, I truly feel we are more awesome. I feel at home when at work; I feel I get to be me. And that isn't true for all the places I've worked at.

There are some things that really leave me pondering, though, and one of them in particular comes through my feeling of gendered reactions. These are not things where I'd even hint at discrimination. They are more things of a structural nature, the expectations of how we are supposed to be. I can also argue they are not about gender but about personality. It just happens that particular traits, while not exclusive to a particular gender, seem to be more often assigned to a particular gender.

But, let's get to my two stories. 

In June, I got a new team member. For the first week we worked together, I got him to pair with me (and my current team *never* pairs, we just talk), strong-style. We tested together, we both contributed, took turns driving and navigating, and did a better job testing our firewall together than either of us would have done alone. He was 15 years old, and at the end of his summer job I was delighted to hear from other senior testers how much he had grown as a tester, compared to how he had been earlier when doing training sessions with us. He is awesome, and pairing with me made his awesomeness develop into something really useful.

However, in week two I was less available, and he got to run with testing a feature of his own. The developers in the team took a different approach to testing than I did, giving him a step-by-step test case and instructions on how to gather logs and evidence, and ended up guiding him into a bit of a mindless task he was not in full control of, due to the appearance of detailed instructions.

His test results showed that there might be a problem. I tested by sampling some of the same things, realizing the problem was in the instructions. But while I did this, a developer in my team took the first results (after I had said I would do a bit of a quality control task) and escalated the worrying results to high-level managers.

He did not hear me say I believed there was a problem with the results.
He was delighted at how valuable the results were that the summer intern had provided.
He did not intend to make me feel like my results are not equally valuable.
He did not realize that he has *never* escalated any of the things I find, even when there was reason for doing just that.
He did not understand that he trusted a 15-year-old intern two weeks into his first job ever more than someone with 23 years of hands-on testing experience.

He did not understand that no one has ever trusted me the way they trusted this 15-year-old. On false results. I've fought to make every bit of important information visible. And I've become good at that. Fighting with a smile. But there's always the extra effort. I need to amplify my voice myself. Luckily, good relations with the managers and a great track record of providing helpful information are usually all the amplification I need.

With this experience behind us, I put my team through a human experiment. In the name of team building I took us to an escape room, to collaborate. There was a detail I did not mention to my team: I had been there before. Which means that this time I had all the answers, and I wasn't planning on giving them out; I was just there as a pair of hands and eyes, leaving most of the puzzles to my teammates. My brain was free to monitor how we worked together, and I was fascinated by what I learned.

I learned that when I gave a piece of information after first securing attention (a touch on the shoulder), I got listened to.
I learned that when I had information people did not want to accept, they completely dismissed me. I had to shout, or act, to be heard.
I learned I get heard best when I amplify something that comes from someone else. 
I learned that even with information that should be easy to accept, I had to find the right person to give it to. 
I learned they would take the credit. 
I learned that I do need to put in the extra effort, but I get heard. The other woman in my team does not, without me amplifying her. 

In a world where a 15-year-old boy is a more trustworthy source than me, I find the problem is in the very idea of listening to and appreciating others' inputs. And instead of teaching people what I have been taught (persistence, increased volume, finding allies, using other people's voices when yours is dismissed), I just wish people would listen for a change.

I have my share of work to do on listening as well.

We're getting worse at testing

A common theme many testers (me included) want to talk about is the negative impacts of automation. I'm at a point where I've definitely come to terms with the idea that automation is a good thing. I've grown to see that my old fight against it was a reason for failing, and I work hard, most of the time, against my natural instinct, giving test automation the time and focus it needs to become great. Time spent fighting is time away from improving. And I know that I can help improve it, significantly.

One of the ways this revelation of mine shows is that within the organizer group of European Testing Conference, we have decided not to accept anti-automation talks. We all know automation has limits and negatives. But we want our focus to be on finding ways around the problems: practical solutions, insights and ways forward.

One of the calls, with four proposals, included a talk that I felt belonged in the category we wouldn't feel like giving stage to, yet even the short discussion with Jan-Jaap Cannegieter was inspiring. He introduced me to a book by Nicholas Carr called The Glass Cage and its core message of how automation is making us more stupid, forgetting how to do things without it.

I work with a team highly divided in our focus on automation. A regular discussion with the person focused on testing through automation is about *how are we testing this*. The discussion as such is not the interesting part; the pattern of how that discussion goes is. The reliance on code to see what it does, the inability to talk at the level of concepts, or even to remember what has been covered at a high level without looking at the code, is evident. The same question asked of those with an exploratory approach starts with areas and features, and only last the details that could or could not be documented.

It would seem tempting to say that automation is making us stupid. It would feel tempting to say it reduces our ability to see our testing and to explain our testing conceptually, while adding to our ability to cover our asses by showing the exact detail of what is covered - the part I personally find the least relevant.

Jan-Jaap made a point about us forgetting, with the extensive focus on automation, how to talk about coverage and test techniques. Yet just a few days ago I had a fascinating and insightful discussion with someone else submitting on Test-Driven Development, giving insightful examples of how TDD has made them test with several positive tests and cover more ground of the actual solution domain.

So have we forgotten what it is to test? Where does the new generation of automation-first testers learn that? Clearly many of them haven't forgotten, yet they get easily fooled by the opportunity cost of doing the best thing possible only in the automation context, optimizing for the long term.

Then again, look at the 120 submitters and 200 topics proposed for European Testing Conference: not a single one on a practical use of a test technique to analyze a problem for coverage. Not a single one teaching how you test. We found some hidden in the talks about process and company experience, but none among the active submissions.

Perhaps the problem isn't automation. Perhaps it's the way we talk of testing - as in, not talking of it. With a few notable exceptions.

Share more of how you test. That is valuable and interesting. It's not automation that makes us worse at testing; it's our choice of letting automation (and the programming problem in it) take all the focus, stopping our talk around the domain: how do we test.

My hope with automation is in the programmers who no longer need to use their learning power on the details of scripting, solving (through learning) the higher-level problems of the domain. But every day, I feel less inclined to believe in the tester-turned-automators. They need to amp up their learning in a balanced way to restore my faith.

Tuesday, August 29, 2017

Collaboration Call at Its Best

I've been putting some intensive hours into getting to know loads of awesome people within timeboxes of 15 minutes. We call these European Testing Conference Collaboration Calls, where we (organizers and potential speaker) meet to understand the potential talk better.

We are doing this with the 110 people who left their contacts when we called for collaboration, plus a selection of others we would like to consider (e.g. mentions from those who submitted), totaling somewhere around 150 discussions. We do most of this paired, to make sure we hear from both a tester and a developer perspective.

While the hours sound high, we feel this is an investment into a wider scope of things than just the immediate selection. We don't think of it as an interview; we approach it as a discovery session. We discover similar and different interests and viewpoints, to build a balanced program of practical value that raises the bar of software testing.

150 people means somewhere in the neighborhood of 200 talks. And the conference program for 2018 fits 14.

I've been delighted with each and every discussion, getting to know what people feel so strongly about that they want to share it at a conference. I've learned that two thirds of the people I would never select based on the text they submit, but I can get excited about their talks when I speak with them. Some would become better with a little guidance. Sometimes the guidance we fit into the 15 minutes is enough to make them better.

Most of the calls end with "I already see stronger proposals in the same category" or "We'll keep you on the list to see how that category builds as we continue doing the calls". Today's call was the first that ended with "Welcome to European Testing Conference 2018 as a speaker".

The call was what I think of as a collaboration call at its best. This time a first-time speaker had submitted a title (with the remark 'working title') and an abstract of two sentences. As they went through the talk proposal, it sounded exactly like many others: how testers can provide value other than programming. At one point in the story, half a sentence was something like "I introduced them (programmers) to heuristics and oracles", with explanation around it making it obvious this lesson was well received and useful. In the end we told them what we heard: a story that was relevant and shared by many, and a piece that should be a talk in its own right.

With a bit of collaboration, that piece around heuristics seemed to take form. And knowing what is already on the list to consider, this is the thing we want to show: testing as testers think of it. Practical, improving our craft.
It's a talk that would not exist without this 15-minute call.
It's still open whether that talk will be seen, as anything that emerges this suddenly deserves thinking time, especially for the presenter. We would never want people committing to talks that they don't own themselves. And many of us need introspection through quiet time.

I just wish I had 150 slots available to share the best out of every one of these unique people we get to talk to. So much knowledge, and wonderful stories of how the lessons have been learned.

Tuesday, August 22, 2017

A look into a year of test automation

It's been a year since I joined, and it's been a year of ramping up many things. I'm delighted about many things, most of all the wonderful people I get to work with.

This post, however, is about something that has been nagging at the back of my head for a long time, yet I've not taken any real action beyond thinking. I feel we do a lot of test automation, yet it provides less actionable value than I'd like. A story we've all heard before. I've been around enough organizations to know that the things I can say with visibility into what we do are very much the same in other places, with some happy differences. The first step to better is recognizing where you are. We could be worse off: we could be unable to consider where we are with evidence of things we've already done.

As I talked about my concerns out loud, I was reminded of things that test automation has been truly valuable for:
  • It finds crashes where human patience to stick around long enough will not do the job, and turns random crashes into systematic patterns by saving the results of various runs
  • It keeps checking all operating systems, where people don't
  • It notices side effects on basic functionality in an organization where loads of teams commit changes to the same system without always understanding the dependencies
However, as I've observed things, I have not seen any of these really in action. We have not built stuff that crashes in new ways (or we don't test in ways that uncover those crashes). We run tests on all operating systems, but when they fail, the reasons are not operating-system specific. And there are much simpler tests than the ones we run to figure out that the backend system is again down for whatever reason. Plus, if our tests fail, we end up pinging other teams for fixes, and I'm growing a strong dislike of the idea of not giving these tests to the very teams that need pinging, to run themselves.

Regardless of how I feel, we have now invested one person and a full year into our team's test automation. So, what do we have?

We have:
  • 5765 lines of code committed over 375 commits. That works out to roughly 31 commits a month, averaging about 15 lines per commit.
  • The code splits into 35 tests with 1-8 steps each. Reading them, I'm still ashamed to call what these tests do testing, because they cover very little ground. But they exist and keep running.
  • Our test automation Python code is rated 0.90/10 by Pylint, with 2839 complaints. That means roughly every second line needs looking into. The real number is worse, as I have not yet set up some of the libraries.
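As a sanity check on those figures, the ratios work out like this. The numbers are the ones quoted above; the tiny script is just my back-of-the-envelope arithmetic, not anything from our actual repository:

```python
# Back-of-the-envelope arithmetic on the year-one automation numbers.
total_lines = 5765        # lines of test automation code over the year
total_commits = 375       # commits over roughly twelve months
pylint_complaints = 2839  # issues Pylint reported against that code

lines_per_commit = total_lines / total_commits         # ~15 lines per commit
commits_per_month = total_commits / 12                 # ~31 commits per month
complaints_per_line = pylint_complaints / total_lines  # ~0.49: every second line

print(f"{lines_per_commit:.0f} lines/commit, "
      f"{commits_per_month:.0f} commits/month, "
      f"{complaints_per_line:.2f} Pylint complaints/line")
```

The complaints-per-line ratio is the number that stings: whatever the test coverage story, half the lines have something Pylint objects to.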
In the year, I cannot remember more than one instance where the tests that should protect my team (other teams have their own tests) found something that was feedback to my team. I remember many cases where, while creating test automation, we found problems - problems we could also have found by diligently covering the features manually, but I accept that automation has a tendency of driving out the detail.

I remember more cases where we fixed automation because it monitors that things are "as designed" when the design itself is off.

I know I should do something about it, but I'm not sure if I find that worth my time. I prefer the manual approach most of the time. I prefer to throw away my code over leaving it running.

There's only one thing I find motivating when I consider jumping into this. It's the idea that testers like me are rare, and when I'm gone, the test automation I help create could do some real heavy lifting. I'm afraid my judgement is that this isn't it yet. But my bar is high, and I work to raise it.

As I write this post, I remind myself of a core principle:
all people (including myself) do the best work they can under the pertaining circumstances.
Like a colleague of mine said: room for improvement. Time to get to it.

Friday, August 18, 2017

Making a Wrong Right

I've coached my share of awesome speakers to get started in speaking. That makes me a frequent mentor. I'm also an infrequent sponsor, and would love to find possibilities to make that more common.

This week, one of my mentees spoke up in a group of women in testing about a challenge she was facing in speaking. She had been selected to speak at EuroSTAR, which is a #PayToSpeak conference, meaning you pay your own travel and stay. When planning for the conference, she had counted on her company's support, but things changed and that support was retracted.

She was considering her options. Cancelling, which seemed hard, after all, since the program was already out. Showing up just for the talk, to minimize the cost. And that was pretty much it.

She asked around for advice on getting companies to pay for travel, only to hear that it was not uncommon among the group of women, even the frequent speakers, for their employers not to pay for travel. The conferences really should pay the travel and stay for their speakers. And the good ones do.

I was delighted to have the opportunity to step up and offer a travel scholarship for this particular case. Second year in a row, I'm using my non-profit to pay a Speak Easy connected minority speaker to go to EuroSTAR, to speak and use the full opportunity of learning. I call EuroSTAR my favorite disliked conference, as they really should change their policy. And while I can't change them, I can change a small part of the pain the #PayToSpeak policy causes.

I can make one small wrong right. This speaker is awesome in many ways, and an inspiration to me.

I just briefly checked some conferences that pay their speakers and some that don't. Unsurprisingly, the ones that pay their speakers have a much more natural gender balance.

I can correct one wrong. The power of correcting the bigger wrong lies with the conference organizers. 

Tuesday, August 15, 2017

Dare to be the change you want to see

"Thank you for not having the steering group preparation meeting," he carefully said after the first steering group meeting following the holidays. I probably looked puzzled, as for me it was obvious and part of what I had signed up for. I wasn't about to turn into a project manager in an awesome place that has no scrum masters and usually no project managers either. I'm a hands-on tester. But when the previous project manager stepped down and I saw a need for people to meet weekly to exchange management-level news (leaving teams alone unless they pulled the info in), there was no other option than promising to hold the space for the steering group to happen.

Let me go back a little in time. I joined a year ago, and we had no project manager. We had teams of software engineers and quality engineers, and as with new teams, we were finding our way with the guidance of self-organizing. Many of us were seniors, and we got the hang of it.

While we were stumbling to form as individual teams and establish cross-team relations across two sites, someone got worried enough to "escalate". And escalation brought in a project manager.

The project manager visibly did two things. He set up a steering group meeting, where I ended up as a member ("just a tester", but none of us is *just* anything these days). And he set up a cross-team slot. He was probably trying to just create forums, but they felt more like ceremonies. The cross-team session was a ceremony of reporting to him, as much as he tried to avoid it. And the steering group was a ceremony of reporting cleanly to the management, as it was always preceded by a prep meeting as long as the actual meeting, with only 3 of the 8 people present.

As the project manager left for other assignments, teams abandoned the cross-team slot and started more active 1:1's as they sensed the need. Out of the 10 of us, only 2 had strongly stated over time that the slots were not a good use of time, yet everyone was keen to give them up. The others had just come because they were invited.

And similarly, the steering group meetings turned into actual discussions, creating a feeling of mutual support and sharing, without the pre-meeting. I stated I was there to hold the space, and that's what I do. I start discussions, and end them if they don't fit our mutual understanding of what the meeting is about.

But for the last six months, I did not like the way we did things. Yet I too, while expressing my feelings every now and then, went through the motions. I only changed when the environment changed.

All of this reminds me to be more brave: dare to be the change you want to see. Experiment with fixes. And not only when people leave, as they were never the real bottleneck. It was always in our heads. My head amongst the others.

Friday, August 11, 2017

A Serendipitous Acquaintance

We met online. Skype to be precise. Just a random person I did not know, submitting to our conference. And we talk to everyone on Skype.

As the call started, we had our cameras on, like we always do at the beginning of a call, to create contact between people instead of it feeling like a phone call between strangers. And as his camera was turned on, we were in for a surprise. It was clear we were about to talk to a teenage boy who had just submitted to a testing conference.

We talked for 15 minutes, like with everyone. It was clear that, based on his talk proposal, we would not be selecting him. But listening to him was different. His thoughts were clear and articulated. He was excited about learning. He was frustrated about people dismissing him - he had submitted to tens of conferences, and we were only the second he would hear back from. We asked him questions, poked at his experience and message, and got inspired. Inspired enough to suggest that regardless of our decision for this conference, I would be delighted if he would accept my help as a speaker mentor, so I could help him hone his message further. He had delivered a keynote at Romanian Testing Conference through local connections, and was driven to do more. 15 minutes was enough to realize that Harry Girlea is awesome.

When I later met him to go through his talk and we talked for 40 minutes, the first impression strengthened. This 13-year-old is more articulate than many adults. When he told me stories of how wonderful he felt testing with professional games testers in game realms, I could hear he was proud of his learnings. And when he coined why he loves testing as "As tester, things are similar but are never the same", all I could do was say that with my extra 30 years of experience, I feel the same.

It became clear that he wanted to be a bigger part of it, speaking in conferences and learning more on testing.

We improved his talk proposal, and he submits again. For European Testing Conference, we have not made our choice yet. But I hope we are not the only ones seriously considering him.

The kids of today learn fast. We adults have a lot to learn from them.

Thursday, August 10, 2017

We don't test manually at all

We sat in a room, the 7 of us. It was a team interview for a new candidate, and we were going through the usual moves I already knew from doing so many of these in the last few weeks. As part of those moves, we asked the programmer candidate how they test their code.

It wasn't the candidate that surprised me, but one of my own team's developers, who stated:
"We don't test manually at all".

My mind was racing with wonder. What the hell was I doing, if not testing? How could anyone think that whoever was figuring out scenarios, very manually, wasn't doing manual testing at all? Where had my team's education failed so badly that any of them could even think that, let alone say it out loud?

Back in the team room, I initiated a discussion on the remark to learn the meaning of it.

What I was doing wasn't included (I do a lot of exploratory testing and find problems) because I refuse to test each build the same way.

What the developers were doing wasn't included because manual testing targeted for a change is just part of good programming.

Figuring out scenarios to automate, trying them out to see if they work when turned into code, and debugging tests that fail wrong (or don't fail right) wasn't included, because it is part of test automation.

So I asked what, then, was this infamous manual testing that we did not do? It is the part of testing that they consider boring and wouldn't label as intellectual work at all. The rote. The regression testing done by repeating things mindlessly, without even considering what has changed, because there could be things that just magically broke.

We test manually, plenty. We are no longer mindless about it. So I guess that's what it really means. Not manual, but brain-engaged.

I can just make sure that the people who matter in recruiting ensure someone particularly well brain-engaged joins the teams. Sometimes that someone is not the tester who specializes in automation.

Sunday, August 6, 2017

Community over Technology in Open-Source

So, you have created an open source tool. Great, congratulations. I'm still planning on adding mine to the pile. But let me say something I wonder if techie people understand: *your tool is not the only tool*. As a user of these tools, I feel the weight of trying to select one that I even want to look at. I've looked at many, only to find myself disappointed by something I find relevant being missing. And yes, with an open source tool, I've heard the usual mantra that I can just change it. But forking my own version to have faster control over it creates a merge hell, so you had better make sure you let things into the main repo fast enough and don't leave them hanging in the pull request queue.

There are loads of awesome open source tools, but the user's challenge is no longer so much finding some as finding one that is worth investing your time in. Having something in your tool stack die and replacing it creates distraction. So most of us go for tools with good communities. The tech matters less than the community.

With the European Testing Conference Call for Collaboration, many people who have created a tool propose a talk on that tool. A quick search on GitHub tells me there are 1,004,708 repository results for "testing", and over two years of these 15-minute calls, I've gained a small insight into maybe a hundred people creating and maintaining their own tools, wanting to share their awesomeness.

Last year we defined what kind of things we might consider, saying that a tool talk has to offer either an insightful idea that anyone could relatively easily bring into their own testing framework, or something that an open source tool supports. This year, I'm learning to add more requirements to the latter.

An open source tool is of no support if it does not have a proper community. There need to be other users and an active core group answering questions and improving the experience of getting introduced to the tool. But it also matters more to me now how the core group deals with their project.

If I see pull requests that have been in the queue for a long time, it hints to me that the community contributions are not seen as a priority.
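That hint can be turned into a quick check. As a sketch, assuming you have already fetched the open pull requests (for instance from GitHub's `/repos/{owner}/{repo}/pulls` listing, whose items carry an ISO-8601 `created_at` timestamp), the counting part might look like this; the function name and the 90-day threshold are my own invention, not any established metric:

```python
from datetime import datetime, timezone

def stale_pr_count(pull_requests, days=90, now=None):
    """Count open pull requests older than `days` days.

    `pull_requests` is a list of dicts, each with a 'created_at'
    timestamp such as '2017-03-01T12:00:00Z' (the shape a GitHub
    pull request listing returns).
    """
    now = now or datetime.now(timezone.utc)
    stale = 0
    for pr in pull_requests:
        created = datetime.strptime(
            pr["created_at"], "%Y-%m-%dT%H:%M:%SZ"
        ).replace(tzinfo=timezone.utc)
        if (now - created).days > days:
            stale += 1
    return stale
```

A high count relative to the project's overall activity would, for me, be exactly the signal described above: community contributions are not a priority.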

Building and supporting a community takes effort. I see some projects understand that and emphasize a community that welcomes contributions, while others treat the community more as outsiders.

I'm grateful for the 15 minutes of insight into tools I would never have given even that much time, were it not for having the main contributor as my guide on the call, wanting to share their project in one of the limited spots of the conference. Any conference, not just European Testing Conference, works against a limited budget of slots, and that means that out of a typical 10-20 slots, not all of these tools will ever be presented.

What are the tools that are worth the spots, then? Selenium and Protractor are clearly choices of the community already. Others need to solve a common problem in a particularly insightful way, and have a future ahead that the community can believe in.

Community is more relevant. 

Wednesday, July 26, 2017

Making a conference talk practical (for me)

I've again had the annual pleasure of talking to *amazing* people from around the world, both seasoned speakers and new, and getting inspired by their stories. It pains me to know that only a small percentage of all the awesome people can be selected, and that our specific selection criterion of practical relevance makes it even harder for many. Simultaneously, I'm delighted to realize that while I may say no on behalf of European Testing Conference 2018, I can help those people make their proposals stronger for other conferences.

Today, however, I wanted to write down my thoughts on what makes a talk practical, to me.

I've had the pleasure of listening to lots of presenters on lots of topics, and over time, I've started recognizing patterns. There's one typical talk type, usually around themes such as security testing, performance testing, test automation and shifting work left, that I've categorized as a talk about the importance of a thing. This is one where the core message is selling an idea: "bringing testers into the whole lifecycle in agile is important". "Test automation is hard and important". "Performance testing continuously is important".

I get this. Important. But I know this already. My question is: if it is important, what do I do? So here are the stories I'd rather hear, the ones that make a talk practical.

1) I sort of knew X was important,  but we did not do it. We failed this way. And after we failed, we learned. This is specifically what we learned and how we approached solving the problem. So, learn from my mistakes and make your own.

2) I'm an expert in X, and you may be an expert or a novice. But if you try doing X, you could try this specific thing in X in this way, because I find that it has been helpful. This answers your question of how, after quickly introducing the what, and enables you to leave the conference knowing what you can do, not just that you will need to do something.

3) Here's a concept I run into. Here's how I applied it in my project, and here's what changed. Here's some other concepts we're thinking of trying out next.

Assume important or necessary is a prerequisite. What would you say in your talk then? 

Tuesday, July 25, 2017

Greedy Speakers are the Death of Conferences

Conference organizing is hard work. Lots of hours. Stress over risks.

But it's also great and amazing. Bringing people together to learn and network makes me feel like I'm making a small difference in the world.

And for me, in 2017 it has also meant losing some tens of thousands of euros on organizing a conference that was totally worth the investment, regardless.

I organize conferences idealistically. My ideology is two-fold: 
  1. I want to change the world of conferences so that money isn't blocking the voices from getting to stage. 
  2. I want to raise money to do more good by supporting speakers for conferences that don't pay the speakers.
I also organize without raising money, and I've made organizing without any money a form of art for myself in the last 15 years. But that's local meetups, and I do a lot of them. I have four coming up in the next month. 

I'm tired of conferences where the majority of speakers are vendors, because vendors have an interest in paying to speak. I want to hear from practitioners, and sometimes consultants, if they keep the selling to a minimum. The bottom line is that all speakers have something to sell anyway - their personal brand, if nothing else.

I would like to believe that conference-going is not a zero-sum game, where choosing one takes away from another. People need places to share, and there are a lot of people wanting to listen to various perspectives. But I also feel that people need to make choices about which conference they go to, with their limited budget. Cheap conferences are great; they enable your organization to send more people out. But conferences are cheap only if the money comes from elsewhere. And this elsewhere is sponsors, and speakers as sponsors, paying their own way to work for the conference.

Being able to afford the cost is a privilege not everyone has. I would like to see that change, and thus I support the idea of not paying to speak at conferences. This means travel and hotel paid. No fancy expense accounts, not even payment for the hours of work put into the talk you're delivering, but taking away the direct cost.

Conferences that don't pay but yet seek non-local voices have made a choice of asking their speakers to sponsor them and/or the audience (if truly low-cost). If they're explicit about it, fine.

They could choose to seek local voices, so that travel and expenses are not relevant. But they want to serve the local community with voices that travel, and people (who can afford the travel in the first place) have the freedom to make that choice. The local community never gets the chance to hear from someone who won't travel. They haven't heard that voice before, and still won't. And the ones who can't afford it (I was one!) can be proud and choose to remain local, rather than go begging for special treatment. Some people don't mind asking.

I wrote all of this to comment on a tweet:
I've been told that travel expenses for speakers, and in particular paying the speakers, are the death of commercial conferences too; they need to pay the organizers' salaries. It's a choice of ticket pricing and who gets paid first. Local conferences don't die of travel expenses if they work with local speakers. But they tend to like reaching out to "names" who could bring news from elsewhere to the local community.

The assumption is that a higher ticket price is the death of a conference. It's based on the idea that people don't value (with money) the training they're receiving. Perhaps that is where the change needs to be - the expectation of a free meal.

I can wholeheartedly support this: 
Do that even if you're not a first time speaker. There's nothing wrong with building your local community through sharing. It might give you more than the international arenas.

Greedy speakers are not the death of conferences. There are conferences with hugely expensive professional speakers that cost loads, and still fill up. If anything is the death of conferences, it's the idea that people are so used to getting conferences for free that they won't pay what organizing a *training*-oriented conference really costs.

Luckily we have open spaces where everyone is equal and pays. We're all speakers, all participants. Conferring can happen without allocated speakers, as people meet.

Saturday, July 22, 2017

A Team Member with Testing Emphasis

Browsing Twitter, I came across a thought-provoking tweet:
Liz Keogh is amazing in more ways than I can start explaining, and in my book she is a programmer who is decent (good, even) at testing. And she understands there's still more - the blind spots she needs someone else for. Someone else who thinks deeply and inquisitively. Someone else who explores without the blind spots she has developed while creating the code to work the way it's supposed to.

Liz is what I would call a "team member with programming emphasis". When asked to identify herself, no matter how much she tests, she will identify as a programmer. But she is also a tester. And many other things.

Personally, I've identified as a "team member with a testing emphasis". That has been a long growth from understanding why someone like Ken Schwaber, years and years ago, would suggest to my manager that I - who wanted to be a tester - should be fired. Thinking it over, I've come to the conclusion that this is one of the ways to emphasize two things:

  1. We are all developers - programmers, testers and many others 
  2. We need to work also outside the focus silos when necessary or beneficial
For years, I did not care so much for programming, so I found a name for myself that I was more comfortable with than "developer", which is still heavily loaded towards programming. I became a self-appointed team member with a testing emphasis.

This still works, as I've grown outside my tester box and taken on programmer tasks. It means that while I code (even extensively, even production code, not just test code), the tester in me never leaves. Just like the programmer in Liz never leaves.

Liz can be the brilliant tester she already is, in addition. And I can become the brilliant programmer I intend to be. And yet she can still be the programmer, and I can still be the tester. 20+ years of learning allows growth outside the boxes. But it's still good to remember how we got here.

If the software industry doubles every five years, half of us have less than five years of experience. Perhaps it makes sense to learn a good foundation, starting from different angles, and build on it.

Individuals make teams. And teams are stronger with diversity of skills and viewpoints. 

Automation tests worth maintaining

A retrospective was on its way. Post-its with Keep / Drop / Try were added as we discussed the perspectives together. I stood a little to the side, being the loud one, leaving room for other people's voices. And then one voice spoke out, attaching a post-it to the wall:

"It's so great we have full test automation for this feature"

My mind races. Sure, it's great. But the automation we have covers nothing. While creating it for the basic cases, we found two problems. The first was about the API we were using being overly sensitive to short names; adding any of those completely messed up the functionality. I'm still not happy that the "fix" is to prevent short names that could otherwise be used. The second was around timing when changing many things. To see things positively, the second one is a typical sweet spot for automation to find for us. But since then, these tests have been running, finding nothing.

Meanwhile, I had just started exploring. The number of issues was running somewhere around 30, including the announcement of the "fix" that made the system inconsistent and that I still deem a lazy fix.

I said nothing, but my mind has been racing ever since. How can we have such different perspectives on how awesome and complete the automation is? The more "full" it's deemed, the more it annoys me. I seek automation that is useful and appropriate, in particular over the long term, not just at the time of creation. I don't believe full coverage is what we seek.

I know what the automated tests test, and I often use them as part of my explorations. There's a thing that enables me to create lists of various contents in various numbers, and I quite prefer generating over manually typing this stuff. There are simple cases of each basic feature that I can run with scripts, and then manually add aspects to what I want to verify in exploration. I write a lot of code and extend what is there, but I rarely check in what I have - only if there was an insight I want to keep monitoring for the longer-term future.
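To give a flavor of what I mean by generating over typing, here's a minimal sketch of a throwaway generation script. All names here are hypothetical; the real product and its APIs are not shown:

```python
import random
import string


def generate_names(count, min_len=1, max_len=12, seed=None):
    """Generate a list of names of varied lengths - including the very
    short ones that are easy to forget when typing test data by hand.
    (Hypothetical helper for illustration only.)"""
    rng = random.Random(seed)  # seeded for repeatable exploration sessions
    names = []
    for _ in range(count):
        length = rng.randint(min_len, max_len)
        names.append("".join(rng.choice(string.ascii_letters) for _ in range(length)))
    return names


# Varied list sizes beat typing test data by hand:
for size in (1, 10, 1000):
    batch = generate_names(size, seed=size)
    assert len(batch) == size
```

The point is not the script itself but the habit: disposable generators like this extend exploration with scale and variety, without necessarily being worth checking in.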

Cleaning up scripts and making them readable is work. Maintaining them when they exist is work. And I want to invest in that work when I believe the investment is worthwhile.

The reason I started to tell this story is that I keep thinking that we do a lot of harm with the "manual" vs. "automated" testing dichotomy. My tests tend to be both. Manual (thinking) is what creates my automation. Automation (using tools and scripts) is what extends my reach in data and time.

Tests worth maintaining are what most people think of with test automation. And I have my share of experience of that through experimenting with automation on various levels.

Wednesday, July 12, 2017

Is Mob Programming just Strong-style Randori?

Back in the days before Mob Programming was a thing, there was a form of deliberate practice referred to as Randori. The idea was pretty similar to the mechanics of mobbing: a pair out of a group would work on a problem at a time, and then you'd rotate.

My first Randori experience was a long time before I ever heard someone was working in this "mob programming" style, and on a shallow level, the difference I saw from my first introductions to mob programming was the use of strong style navigation. So the question emerged: is mob programming really just a strong-style Randori?

I'm blogging since I listened in to a discussion where Llewellyn Falco was explaining a saying he likes:
Pool is not just a bigger bath tub.
Surely, a pool is a container with water in it. So is a bath tub. But the things you can do with a pool are significantly different from the things you can do with a bath tub.

Examples popped out: there's such a thing as a pool guard, but it would make no sense to have a tub guard. Pool parties are a thing, but you might argue that a tub party is a very different thing. The physical exercise aspects of pools are non-existent in tubs, you wouldn't really say you swim in a tub.

While it is a fun word game to make one think, it is a good way of illustrating why mob programming is not just a strong-style Randori. What mob programming, as Woody Zuill and his team introduced it, brings in is hundreds of hours of learning while continuously collaborating - and with that collaboration, some of the problems we saw no way of making go away simply vanish.

Doing things over a long time, growing together, makes it different. Mob Programming is different.

And the same applies to many of the things where we like to say that this "has been around forever". Test-after and test-first are essentially different. The things we can do with continuous delivery are essentially different from just continuous integration.

Tuesday, June 27, 2017

Incompatible cultures

A few weeks back, I started a talk on introducing myself as someone who is not officially responsible for anything, which makes me unofficially responsible for everything. I also talked about how with working in self-organized teams, I find myself often identifying the gaps and volunteering for things that would otherwise fall between.

I'm a big believer in self-organization, and people stepping up to the challenges. I know self-organized teams make me happy, and I wouldn't care to work in any other way.

A lot of communication is one on one, so to talk to my team, I've come to accept that the discussion can come through any of my team mates. There's no "I must be invited to the meeting"; there's "the team needs to be represented in the meeting". We learn a lot from each other about what questions the others would like answered, and often whoever acts on the information is the best person to be in the discussion, over someone with assigned power.

I've seen what assigned responsibilities do: they create silos and bottlenecks that I spend time bringing down. And yet, culturally, some people just can't believe there is such a thing as a self-organized team - there must be a responsible individual.

I ran into this collision of ideas today, as I was seeking a bigger research->delivery task for my team to complete during the difficult summer period, when some are here and some are away, and the lack of shared responsibilities really shows its ugliest side. As I was asking, I heard that one of my team members had been "assigned responsible" for the research, and the rest of us would just do tasks he assigns.

I felt the urge to flee. Instead, I wrote this down as a reminder for myself to work more on what I believe an efficient R&D to be: self-organized, with shared responsibilities.

I wonder if that will ever fit the idea of "career advancement" and "more assigned responsibility". Time will tell.

Minimizing the feedback loops

As summer vacations approach, I'm thinking of things I would like to see changed where I feel a recharge is needed before I can take up on those challenges. And I'm noticing a theme: I want to work on minimizing the feedback loops.

The most traditional of the feedback loops is having the just-implemented feature in the hands of the users. I keep pushing towards continuous releasing and the related cultural changes in how we collaborate on making the changes that get published.

But it's not just pushing the changes out for the end users to potentially suffer from. There's a lot of in-company feedback that I'd like to see improve. I get frustrated with days like yesterday, when all test automation was failing and I still couldn't get the changes introduced that would stop the automation from failing on a single prerequisite outside my team's powers. People like walking roads travelled before, when there would be opportunities for better if we sought out ways to do things differently.

The feedback loop that seems the hardest is the one of collaboration. We co-exist, on very friendly terms. But we don't pair, we don't mob, and we don't share as I would like to see us share.

Maybe after the vacations, I will just push for experimenting while making others uncomfortable, in short time boxes. It's clear there are things to do that will make me uncomfortable alone as well, but the ultimate discomfort for me seems to be making others uncomfortable.


Monday, June 12, 2017

From avoiding tech debt to having tech assets

The question I always get when talking about mob programming is how that could be a better / more effective way of working than solo work. The query often continues with: do you have research results on the effectiveness?

As someone with a continuous empirical emphasis in my work as a tester, and someone with a background in research work at a university, I'm well aware that the evidence I care to provide is anecdotal. I have other things to do than research nowadays, and having done research, I realize the complexities of it. And while anecdotes are not research results, I can work with anecdotes.

One of the themes I like collecting and providing anecdotes on around mobbing is that to me it makes little sense to compare an individual task; what matters is the chain of value delivery. Many times with mobbing, we end up with significantly less duplication of code, as someone in the group acts as the memory, pointing out that something of that sort is already used somewhere else.

Here's an anecdote I added to my collection just today: "QA person, where were you 9 hours ago when your knowledge would have saved us from all this work?". A team of programmers was mobbing, wondering how to work with a particular technology. To everyone in the group, it seemed there was significant implementation work of a scaffolding type to do, and the team set out to do that work. Later, another person became available to join the mob and, with the knowledge available to them, eradicated all the work up to that point just by having the information: an appropriate library for the scaffolding was already available, and was used in the tests.

I've seen my own team talk around an implementation, starting with one strong idea and ending up with the best of what the group had to offer. I've watched my team express surprise when days of work get eradicated by knowing the work has already been done elsewhere. I've watched them come to the realization that whatever they would have implemented solo would have been re-implemented to better match the architectural principles or the best use of common components.

I've also had the chance to see a mob go through about ten solutions to a detailed technical problem, just to find the one with the fewest tradeoffs between maintainability, performance and side-effect functionality.

A lot of times, the best result - the one paying back in the long term - never emerges from solo work. And that makes the comparison of what effort it took to generate some value in a mob vs. solo all the more difficult. It's not the task, it's not even the delivery flow, but the delivery+maintenance flow that we need to be comparing.

Tuesday, June 6, 2017

Fill the Gap

About two weeks ago, business as usual, I installed the latest build to notice that clearly someone from some other team had worked on our user interface. Whatever we had done to make it nice enough had been replaced by problems I did not quite understand. I reported the issue to offload it, and focused on other things of relevance.

With communication through various steps on what the status was, we got word that it would be fixed soon. Days passed, and soon wasn't soon enough. We finished another feature we needed to release, and a thing of temporary annoyance turned into a release blocker.

Friday afternoon, I decided to take a moment on the legwork, learning first that the developer making the changes had left for three weeks of vacation, and that the second developer had only partial knowledge of how the changes he contributed made their way into the build. He also pointed out that he had fixed "the issue" three hours ago and sent whatever he was doing over email to the one now on vacation.

Asking around a little more, I learned what the thing was that was sent over email, and where it belonged - and that it was in place, yet the problems still persisted. I learned to do the necessary tweaks there myself - all I needed was to know what to tweak.

Monday started with fierce determination to get the problem over and done with. I sat down with the second developer to show him what I saw in the product, and he showed me what he saw in his component test environment. It became very obvious that the simulator he was running was not a match for the real end-user environment with the problem. We narrowed the problem down to seven lines of CSS, and eventually one line of CSS.

The mystery started to unfold. The second developer would provide a piece of stylesheet that was correct. By the time it was in the product, it was incorrect. If it was as it was originally given, there would be no problem. 

Hunting down a bunch of Jenkins jobs in the pipeline, I learned the problem was in encoding a particular character that shouldn't get encoded. Speculating on the field that got encoded, we realized removing the encoding would have further effects. What followed was a funny one hour of experimenting with what could possibly work. Some speculative solutions of hundreds of meaningless characters and an argument about clear code vs. comments later, we found one that made sense and fixed it.
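As a hedged illustration of this kind of failure mode - the actual Jenkins step, character and stylesheet are not shown, so everything below is hypothetical - an over-eager escaping step in a pipeline can corrupt CSS that was perfectly correct when handed over:

```python
import html

# A correct one-line stylesheet as a developer might provide it (hypothetical).
css = '.toolbar > .button { color: #fff; }'

# A pipeline step that HTML-escapes everything it templates into the build
# will mangle characters that CSS needs literally, such as '>' and '&'.
escaped = html.escape(css)

# '>' becomes '&gt;', breaking the child combinator: the browser now sees
# an invalid selector, and the whole rule silently drops out.
assert '&gt;' in escaped and escaped != css
```

The sneaky part is that nothing fails loudly: the source is right, the build succeeds, and only the rendered product shows the problem - which is why it took comparing environments to find.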

It all started with the idea of a bug that needed fixing. It continued to realizing that in a long chain of new and old pieces, ownership wouldn't be straightforward. And I did what we all do in our turn: identify a gap, fill the gap and collaborate on getting things forward. 

In addition to finding the gap, I sat next to people to get the gap filled. I don't need to be assigned responsible to be responsible. 

I could easily still be waiting but I'm not: I fixed the bug. 

Friday, May 26, 2017

Incremental steps to continuous releases

The last eight months for me have had one theme in particular that I consistently drive forward, in small steps that sometimes feel small enough that others don't realize how things are changing.

There's an overall vision in mind for me: I want to take us through the transformation to daily releases for the windows client + management backend product I'm working with.

Where I started from

As I joined 8 months ago, the team had been working for several months on a major architectural type of change - no releases, but a build that could be played with internally. We had "8 epics" to drive the architectural changes through, and none of those were done. There were dependencies all around, and making a release someone would use wasn't a straightforward task.

I started in September. The first release went out November 23rd.

There's more than a decade of history of making continuous releases of the detection and cleanup functionalities within the product, but the frame of the product has been released annually or quarterly for production use, and monthly or biweekly for beta - something I was introducing here a decade ago.

When I started talking of daily releases, I was told it was impossible. It took me 4 months to get rid of the "it cannot be done" comments.

The pain of regularity is necessary

I had a firm belief (which I still hold) that when things are deemed hard, you just need to do them more to learn how to make them less hard. So I struggled with my team through the discussions of "releasing takes too much time and is away from real work", with support from our manager, who set it as a team goal tied to bonuses that we would turn our 4-day release into a 4-hour release.

Each release would see a little more automation. Each release would see a little more streamlining. We would find things that would be difficult (not impossible) to change and postpone those, focusing first on the low-hanging fruit, never giving up on the ultimate goal: releasing to various environments at the touch of a button.

A month ago, I could happily confirm that the first goal, as it ended up being written down, was achieved.
[Team Capability] Turn 4 day release to 4 hour release
We believe that the ability to make our client releases with a shorter duration will result in saved time in making multiple releases. We will know we have succeeded when the team does not feel the need to escalate release-making as a threat to features.
We also worked on another capability:
[Team Capability] Min 2 people can make client releases
We believe that having at least two people with the skills, knowledge and access to make client releases will result in being able to make releases while one is sick. We will know we have succeeded when a release happens without the 1st key person present at the office, within the same / similar timeframe.
What next?

We have come to a point of bi-weekly releases, which only takes us to the level I introduced a decade ago. But building on that, the next thing would be to figure out ways of not breaking the builds within the 2-week intervals, and that change takes me far beyond just my own team, including changing the ways test automation supports our development.

There's still work in turning the four hours into four minutes of work, and I look forward to stepping through that challenge.

Our very first production environment release was just done. With more environments in play, each 4 hours can easily grow fivefold, so that would be a next step to work on too.

So the vision I'm working for:
[Team Capability] Four-minute release throughout the environments
We believe that having a push-of-a-button release will result in us focusing more on valuable features and improvement for the user and our organization. We will know we have succeeded when releases happen on a daily basis as features / changes get introduced. 
Why would I, the tester, care for this?

I have people every now and then telling me this is not testing. But this fundamentally changes the testing I do. It enables me to test each change, isolate it, and see its impacts all the way through to production. It supports small, human-sized discussions on changes together in the teams, and gives us an ultimate definition of done - production value over task completion.

It makes developers care about the feedback I give, and enables the feedback to be more timely. And it makes way for the necessary amount of thinking and manual work in both coding and testing, so that what we deliver is top-notch without exerting too much effort.

Pair Testing with a 15-year-old

A few months back, I had the pleasure of working with a trainee at F-Secure. As usual in schools in Finland, there was a week of work practice, with the option of taking a job your school assigns you (I did mine at the age of 15 in a home for the elderly) or finding one of your own. This young fellow found one of his own through his parents, and I jumped on the opportunity to pair test with him.

At first, he did not have a working computer, so it was natural for us to get started with strong-style pairing:
For an idea to go from your head into the computer, it must go through someone else's hands (Llewellyn Falco)
He was my hands as I was testing the firewall. And in many ways he was a natural in this style of work. He would let me decide where to go and what to do, but speak back to me about his observations and ideas, extending what I could see and do all by myself. Most of the things we did together were things I would have done by myself. The only difference was the times we went to the whiteboard to model what we knew and had learned, where I guided him to navigate me through the ideas to document, very much in the same strong style. As the driver drawing, I would ask questions based on our shared testing experience when he seemed to miss a concept.

His ability to test grew fast. He learned to use the application. He learned to extend his exploration with test automation that existed and play with it to create the type of data we wanted.

My reward was to see him enjoy the work I love so much. His words at the end of our joint experience, without me prompting, still make me smile: "I now understand what testing is and would love to do more of it".

He joins us for a full month in June. I can't wait to pair up with him again.