How to Raise a Lean Startup – V

<< Part IV

The previous part talked about the growth of a startup. But an entrepreneur needs to ensure that growth happens in an orderly fashion and that quality doesn’t get compromised in the rush for growth. Any shortcuts taken now would speed up the process only in the short term and would slow down future changes. This is a fact known to every good software developer. If you skimp on quality now, which amounts to incurring technical debt, you will have a tough time making changes to the code later. The only solution is to slow down first and pay off the debt by improving the quality. Once quality is maintained, the process automatically picks up pace and cruises along smoothly.

There is a root cause analysis technique called Five Whys. It allows one to get to the bottom of an issue by repeatedly asking “why?” of each successive answer: for example, a release broke the site (why? an untested change slipped in), the change was untested (why? the developer was never trained on the test suite), and so on until the root cause surfaces. Ries suggests using the same technique to identify issues with the process, making a proportional commitment to a solution at each level of why. Also, the following rules help build a smooth, error-free process:

  1. Be tolerant of all the mistakes the first time.
  2. Never allow the same mistake to be made twice.

Ries says that startup teams need to have a certain structure in order to have a good chance at succeeding. They need to have the following structural attributes:

  1. Scarce but secure resources: there should be just enough resources that the startup needs. Too much would cause waste and too little would stifle growth. And the resources should not be poachable.
  2. Independent authority to develop their business: because a startup is required to carry out experiments and tune its engine, it needs full autonomy to develop and market its products in order to achieve success.
  3. Personal stake in the outcome: unless the team feels personally invested in the product it is building, and unless its own success is tied to that of the product, there’s only a slim chance of success.

Reiterating a point that was mentioned earlier in this series, a startup should focus on doing the right thing first and then on doing things right. In software development, the practice of Test Driven Development (TDD) ensures that only as much code gets written as is needed. Nothing more, nothing less. This blends well with one of the main principles of Lean, which aims at reducing waste.
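
As a tiny illustration of that discipline (a generic pytest-style sketch, not an example from the book), the test is written first and only the minimal code needed to make it pass follows:

```python
# Step 1: write the failing test first; it pins down exactly what is needed.
def test_discounted_price():
    assert discounted_price(100, 0.2) == 80

# Step 2: write only as much code as the test demands, nothing more.
def discounted_price(price, discount):
    return price * (1 - discount)
```

Run it with `pytest`; any behaviour not demanded by a test simply never gets written, which is exactly the waste reduction Lean asks for.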

This concludes the series of blog posts on Lean Startup.

Disclosure: This series is primarily based on Eric Ries’s book The Lean Startup. I have added to it my own experience with Lean and Agile as software development methodologies.

How to Raise a Lean Startup – IV

<< Part III

The first three parts spoke about starting up your startup. This part talks about how to sustain the momentum, or better still, increase it.

Lean thinking advocates small batches of production, if possible batches of one. This is based on the principle of keeping the feedback loop short. A short loop allows production and quality problems to surface sooner. This is why sprints in Scrum-based software development are kept as small as possible, and why user stories in Scrum are also kept small. Small sizes allow incremental and iterative development, while large sizes tend towards all-at-once delivery.

Continuous Integration, a key practice of any Agile software development organisation, is also based on the above theory. Toyota’s production system encouraged a culture wherein anyone on the production line could stop production instantly on spotting a quality issue. Both Continuous Integration and Toyota’s production system recommend the following steps (sketched in code after the list):

  1. The change that introduced the defect to be removed immediately
  2. Everyone on the production team to be notified of the defect
  3. The production to be stopped immediately to prevent introduction of further changes
  4. The root cause to be identified and fixed immediately
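
A minimal, hypothetical stop-the-line gate in Python illustrates the four steps; the test command, git usage, and notification hook are placeholders, not a prescribed setup:

```python
# Hypothetical stop-the-line gate: run the tests on every integration; on
# failure, revert the offending change, alert everyone, and halt the line.
import subprocess
import sys

def notify_team(message):
    # Placeholder: a real gate would page, email, or post to chat.
    print(f"[ALERT] {message}")

def integrate(commit_sha):
    result = subprocess.run(["pytest"], capture_output=True, text=True)
    if result.returncode != 0:
        # 1. Remove the change that introduced the defect.
        subprocess.run(["git", "revert", "--no-edit", commit_sha], check=True)
        # 2. Notify everyone on the team.
        notify_team(f"Build broken by {commit_sha}; change reverted.")
        # 3. Stop the line: exit non-zero so no further changes integrate
        #    until (4) the root cause is identified and fixed.
        sys.exit(1)
    print("Green build; the line keeps moving.")

if __name__ == "__main__":
    integrate(sys.argv[1])
```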

Having a smooth production system isn’t enough. The startup also needs to grow, and sustainably so. Ries defines sustainable growth as growth where new customers come from the actions of past customers. This can happen in one or more of the following ways:

  1. Word of mouth: existing customers talk to other people, who buy the product on hearing positive opinions.
  2. As a side effect of product usage: other people feel compelled to buy the product on seeing existing customers use it, or by engaging with existing customers while they are using it.
  3. Through funded advertising: advertising is paid for out of the revenue that existing customers generate; as long as the cost of acquiring a new customer is less than the revenue that customer generates, this creates a positive feedback loop.
  4. Through repeat purchase or use: some products are inherently designed to encourage repeat purchase, for example subscriptions or consumables.

Now startups can leverage the above to form a growth strategy, which Ries terms an engine of growth. Startups can employ one (or more) of the following engines of growth; a small numeric sketch follows the list:

  1. The Sticky Engine of Growth: this encourages long-term retention of customers (makes them stick to the product). The net growth rate is calculated as the natural growth rate minus the churn rate.
  2. The Viral Engine of Growth: this depends on person-to-person spread of influence (deliberate or involuntary), like the spread of a virus. Growth is measured by the viral coefficient: the number of new customers each new customer brings.
  3. The Paid Engine of Growth: this encourages maximising the return on each new customer, by either reducing the cost of acquisition or increasing the revenue per customer.
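
To make the arithmetic concrete, here is a small sketch with made-up numbers (illustrative only, not figures from the book) of how the sticky and viral engines compound:

```python
# Sticky engine: net growth rate = natural (sign-up) rate minus churn rate.
def sticky_growth(customers, signup_rate, churn_rate, periods):
    for _ in range(periods):
        customers *= 1 + (signup_rate - churn_rate)
    return customers

# Viral engine: each wave of new customers brings viral_coefficient more.
def viral_growth(customers, viral_coefficient, periods):
    new = customers
    for _ in range(periods):
        new *= viral_coefficient
        customers += new
    return customers

# 10% sign-ups against 4% churn compounds at a net 6% per period.
print(round(sticky_growth(1000, 0.10, 0.04, periods=12)))
# A viral coefficient above 1 produces exponential spread; below 1 it fizzles out.
print(round(viral_growth(1000, viral_coefficient=1.1, periods=12)))
```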

Continue to Part V >>

How to Raise a Lean Startup – III

<< Part II

I practice Scrum / XP in the software projects I manage. As part of this, the team iterates through sprints during the development of the project. The sprints are kept as short as possible (usually a week long) so that the feedback loop is short. At the end of each sprint, the team gets feedback on whether it built the right product or not. Likewise, a ship cruising on the ocean prefers frequent checks of the current route against the planned route so that course corrections (if any) are small.

Ries illustrates the same spirit of short feedback loops in the diagram below:

Feedback Loop

The aim is to quickly show something to the customer so as to seek his acceptance and then move on to the next iteration to add some more features, or perhaps remove an existing feature, based on the feedback received. Ries puts it eloquently: “…the goal of the MVP is to begin the process of learning, not end it. Unlike a prototype or concept test, an MVP is designed not just to answer product design or technical questions. Its goal is to test fundamental business hypotheses”.

While the entrepreneur is incorporating feedback and working to improve the product, how does he ensure that he is on the right track? This is where innovation accounting comes into play. It comprises prioritising product features, selecting a target market, and critiquing the vision in the light of market feedback. Innovation accounting has the following steps:

  1. Using an MVP to establish a firm grasp of the startup’s current position (the baseline)
  2. Trying to move this baseline towards where the startup would like to be by tuning the engine
  3. Finally, arriving at a decision on whether to pivot or persevere

The entrepreneur should be careful in measuring the data lest he find himself collecting vanity metrics (which wrongfully depict the startup as being in a healthy or improving condition) instead of real, actionable metrics.
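
A small sketch with made-up numbers shows the difference: cumulative totals (a vanity metric) keep rising even when the underlying per-cohort conversion (a real metric) is flat.

```python
# Hypothetical weekly cohorts: (new visitors, visitors who activated).
cohorts = {
    "week 1": (1000, 50),
    "week 2": (1200, 58),
    "week 3": (1500, 71),
}

# Vanity metric: the cumulative total always goes up, whether or not the
# product is actually improving.
print("total activations:", sum(a for _, a in cohorts.values()))

# Real metric: per-cohort conversion shows whether tuning moved the baseline.
for week, (visitors, activated) in cohorts.items():
    print(f"{week}: {activated / visitors:.1%} activation")
# Conversion hovers around 5% despite the growing totals: no real improvement.
```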

The above helps the entrepreneur decide whether to pivot or persevere. While persevering is relatively easy to understand (iterating through the build-measure-learn loop while continuously tuning the product), pivoting requires a little explanation. Pivoting doesn’t mean throwing away everything and starting from scratch. It is about reusing whatever has been built to the extent possible and then building on top of it to target customers afresh. Pivots can be of one of the following types:

Zoom-in Pivot: what was earlier only a feature of the product becomes the product now and other features are either abandoned or assume lesser significance.

Zoom-out Pivot: the product itself is insufficient and thus more features are added to it to create a new product.

Customer Segment Pivot: the product solves a real customer problem, but it would better serve a different customer segment than the one it is currently being targeted at.

Customer Need Pivot: the problem being solved currently turns out to be less significant than a related one the same customers have, which can be solved without repositioning too much.

Platform Pivot: this is a pivot from selling a product that solves a particular need to letting customers use it as a platform to provide similar services.

Business Architecture Pivot: this is a pivot between high margin, low volume business (usually B2B) and low margin, high volume business (usually consumer products).

Value Capture Pivot: a different feature can be monetised instead of the one currently being so.

Engine of Growth Pivot: the company can pivot among the engines of growth – viral, sticky, and paid. This usually requires a pivot in capturing value as well.

Channel Pivot: a pivot in the distribution channel of the product / service.

Technology Pivot: the company can pivot to a technology that provides a better cost advantage or better performance.

Continue to Part IV >>

How to Raise a Lean Startup – II

<< Part I

Learning is the centrepiece of Lean Startup. So much so that the progress of a Lean Startup is defined in terms of learning milestones, and people are held accountable to those rather than being organised into traditional departments and held accountable for individual responsibilities.

The real motive of an MVP is to generate learning. This is because building the right thing matters more than building something efficiently. There is no point building a great product, by however great a process, if no one desires it. Learning thus helps in validating the hypotheses the entrepreneur makes when building his product.

The product is refined based on the feedback generated from the market through the use of the MVP. In the face of this feedback, which might not always be positive, the entrepreneur might have to decide whether to continue to work on the same product or choose a different strategy. But such decisions are less frequent than the tuning done to the product. Even less frequent, if they happen at all, are changes to the overarching vision with which the entrepreneur set out.

Pyramid

As Ries says, “…a startup is a portfolio of activities. A lot is happening simultaneously: the engine is running, acquiring new customers and serving existing ones; we are tuning, trying to improve our product, marketing, and operations; and we are steering, deciding if and when to pivot. The challenge of entrepreneurship is to balance all these activities.”

Not just startups but even established organisations need to learn and innovate continuously in order to maintain their competitive edge or gain one. In this ever-changing technological landscape, such edges get eroded very fast. Consider BlackBerry, which had long enjoyed the image of a premium, enterprise mobile handset company. It had two major advantages over its competitors: its push mail service and BlackBerry Messenger. With the rise of smartphones and their numerous apps, both these advantages were laid to waste. The result: BlackBerry’s market share, already reduced to a minimum, is shrinking rapidly. The company is desperately looking for someone to buy it out, but nobody wants to.

There is a trap in trying to learn what customers want. An entrepreneur should be able to distinguish between what the customer is asking for and what he really wants. This is because a lot of times customers don’t know for sure what they want. Identifying the real wants and working on the same causes the startup to grow and evolve. This is what Ries calls Validated Learning.

The question is not “Can this product be built?” In the modern economy, almost any product that can be imagined can be built. The more pertinent questions are “Should this product be built?” and “Can we build a sustainable business around this set of products and services?” – Ries

Mark Cook, Vice President of Kodak Gallery, says the same thing in his own words:

  1. Do consumers recognize that they have the problem you are trying to solve?
  2. If there was a solution, would they buy it?
  3. Would they buy it from us?
  4. Can we build a solution for that problem?

All of the above is based on the cornerstone of experiments. I had read somewhere (I think it was in Stephen Hawking’s “A Brief History of Time”) that an experiment cannot be considered a failure if it disproves your hypothesis; it is a failure when it is inconclusive. Therefore even if the product fails, the experiment is still a success, because we now know what the customer doesn’t want.

Now, how does one structure the experiment and its hypotheses? Ries considers two hypotheses worth structuring: the value hypothesis and the growth hypothesis.

The Value Hypothesis tests whether the product or service being built would actually deliver value to the customer. It helps answer the question: would there be customers (early adopters) who buy the initial version of the product (the MVP) and find it useful?

The Growth Hypothesis tests whether the product’s purchase and usage would spread from early adopters to the masses. It helps answer the question: would the business grow from the initial success with early adopters?
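
As a rough illustration (hypothetical usage data, not from the book), the two hypotheses can be framed as measurable checks: retention among early adopters for value, and the viral coefficient for growth.

```python
# Hypothetical log: user -> (weeks of active use, invites that converted).
users = {
    "u1": (6, 2), "u2": (1, 0), "u3": (8, 1),
    "u4": (5, 0), "u5": (7, 3),
}

# Value hypothesis: do early adopters keep finding the product useful?
retained = sum(1 for weeks, _ in users.values() if weeks >= 4)
print(f"retention: {retained / len(users):.0%}")  # crude proxy for delivered value

# Growth hypothesis: does usage spread beyond the early adopters?
viral = sum(invites for _, invites in users.values()) / len(users)
print(f"viral coefficient: {viral:.2f}")  # above 1 suggests self-sustaining spread
```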

Continue to Part III >>

How to Raise a Lean Startup – I

Most of you would be aware of the term “Lean”, thanks to its overuse by a lot of organisations these days (and some slimming centres too). Startup, again, is not an unfamiliar term as every new tech company calls itself a startup.

However, what you might be wondering is “what is a lean startup?”. It isn’t a startup with very few employees (as almost all of them already are). It is a startup that has been founded on the principles of Lean and follows an MVP (Minimum Viable Product) based approach to starting up.

Wikipedia describes MVP as “a strategy used for fast and quantitative market testing of a product or product feature”. Also mentioned alongside is the name of Eric Ries. For those familiar with the startup landscape, Ries is a celebrity. He is said to have popularised the term MVP; some attribute the term entirely to him.

Ries has a very popular blog on Lean Startup and a site that serves as a selling platform for his bestselling book “The Lean Startup” (which doesn’t need selling though). His blog has a very interesting quote from his book:

Startup success can be engineered by following the process, which means it can be learned, which means it can be taught.

Now this is in stark contrast to what some of us might think or have even experienced. Startup success has been considered enigmatic, even elusive like a mirage. But here is Ries claiming that there is a definite process to it. This gives the impression that success can be synthesised as if in a chemistry lab, or manufactured as if on an assembly line.

But what gives him the confidence to make such a bold statement? He attributes it to the concept of the MVP, or in his words, the “build-measure-learn” feedback loop. Through this loop, the entrepreneur builds a bare minimum product in order to test his assumptions / hypotheses, measures the reactions of customers, thus validating or invalidating the hypotheses, and generates learning in the process.

Ries outlines 5 principles of a lean startup:

1. Entrepreneurs are Everywhere
Ries defines a startup as “a human institution designed to create new products and services under conditions of extreme uncertainty” and considers anyone working in one to be an entrepreneur. A person doesn’t necessarily have to work out of a garage to be one.

2. Entrepreneurship is Management
Even a startup requires management; in fact, more than traditional organisations do, because a startup faces bigger and more frequent challenges. However, traditional management is not of much help here, and one needs to think out of the box that various business schools have created.

3. Validated Learning
Ries defines this as the single most important measure of progress in a startup. The startup iterates through multiple failures to arrive at success and, in the process, gains invaluable lessons that help it in the next iteration. This learning is validated by its customers who either accept or reject its products / services.

4. Innovation Accounting
As boring as accounting may be, its importance for startups cannot be overstated. Measuring progress, defining and tracking result metrics, and setting up milestones are all tasks that an entrepreneur needs to fulfil zealously.

5. Build-Measure-Learn
Startups must continuously churn out products, measure their acceptance with the customers, and incorporate their feedback so as to either turn their strategy on a sixpence (pivot) or continue to push harder (persevere).

Continue to Part II >>

Data Science Delhi Meetup – 01

Data Science Delhi Meetup

This was the start of the Data Science Delhi Meetup group and the first meetup at IndicInfo’s new location at Spaze iTech Park. Despite the day falling in the middle of a long weekend, 10 enthusiasts made it to the meetup to discuss their ideas about Data Science.

The meetup comprised multiple sessions / presentations.

Rajat Bhalla talked about the initial days of BigData in the form of ETL and Data Warehousing technologies, covering the design of an Enterprise Data Warehouse and how it helps large enterprises.

Anurag Shrivastava explained where data science can be applied in business, giving examples from the insurance industry. Anurag also explained the importance of access to high-quality historical data for data science tools to work.

There was a Q&A session with Narinder Kumar, a Hadoop trainer and R programmer with long Java programming experience. Narinder covered techniques such as classification and regression after explaining the meaning of machine learning at great length.

Details on the content presented are mentioned below.

Session: Data Warehouse and ETL – Rajat Bhalla

BigData has emerged recently, but it has its roots in the data warehouses that have been prevalent since the 1970s. They had been handling traditional data (data collated from various transactional systems) ever since, but with the arrival of new types of data (blogs, videos, data from social networks, etc.) on the horizon, the traditional warehousing and analytical approaches started to become insufficient. That is when BigData arrived on the scene and glamourised everything. But the foundation of BigData is still in data warehouses. So how about a little tour of data warehouses and ETL?

A Data Warehouse is essentially a relational database, but it is different from a traditional database in a lot of ways. It usually contains historical data and is designed for query and analysis, unlike a traditional database, which is intended for transaction processing. The RDBMS environment in a data warehouse has two main components: an ETL solution (explained later in this post) and an OLAP (online analytical processing) engine which allows operations like roll-up, drill-down, slicing, dicing, etc.

A data warehouse has four main characteristics:

1. Subject oriented: data warehouses are intended for analysing data, so they are built with a context (subject) in mind, for example, sales. This would help the organisation get answers to questions like which user segment purchased the maximum number of products.

2. Integrated: in order for data from disparate sources to make it into the warehouse, the data sets need to be integrated with one another and all inconsistencies resolved. More on this in the Transformation phase of ETL.

3. Non volatile: the data in the warehouse doesn’t undergo change. It is meant to be a repository of all the data, not a target for deletion or change. A warehouse wouldn’t be worth its name if it didn’t have historical data.

4. Time variant: a data warehouse is supposed to accumulate data over time so that changes over time can be duly analysed.

Architecture

Data Warehouse Architecture

The above image illustrates

  • the data sources (various transactional systems, legacy systems) that are brought together in the staging area for cleaning, transforming, integrating, etc.
  • the data warehouse, which contains the raw data from the staging area, subject-wise summaries of the same, and metadata
  • the users who run analytics and reporting tools on the warehouse

There is an alternative to the above architecture in which data marts are created from the warehouse and the reporting / analytics is done on them. These data marts are usually specific to departments or lines of business (like sales, HR, etc.) and contain only the relevant data.

Extraction, Transformation, and Loading (ETL)

Broadly speaking, ETL is about extracting the relevant data from various data sources, integrating it, and then finally populating the data warehouse with it. Let’s look at each step in a little more detail.

1. Extraction
As mentioned earlier, Extraction is the process of extracting relevant data from data sources for inclusion in the data warehouse. In terms of logical extraction, one of the following methods is used:

  1. Full extraction: the entire data set is extracted for further use in ETL
  2. Incremental extraction: the data set that has changed from the time of last extraction is extracted
  3. Update notification: the source system notifies the extraction process of the data set to be extracted

In terms of physical extraction, one of the following methods is used:

  1. Online extraction: the data is extracted from the data sources while they are in use
  2. Offline extraction: the data is extracted from a copy of the data sources, typically generated using binary logs, redo logs, etc.

2. Transformation
Here the data extracted from the various data sources is integrated and any discrepancies are ironed out. Some typical examples would be:

  • Format revision (numeric and string formats are reconciled)
  • Decoding of fields (tacit information like M for Male and F for Female is made explicit)
  • Calculated and derived values (summaries of sales calculated, age derived from date of birth, etc.)
  • Splitting of single fields / Merging of information (Full name split into first and last name or vice versa)
  • Unit of measurement conversion (conversions of metres into feet, kilograms into pounds, etc.)
  • Date/Time conversion (conversion of mm-dd-yy into dd-mm-yyyy, etc.)

3. Loading
The data, after extraction and due transformation, is then loaded into the warehouse. This mirrors the extraction process from a logical perspective: the data that was obtained via full extraction / incremental extraction / update notification is merged with the relevant data in the data warehouse.
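
Putting the three steps together, here is a minimal, hypothetical sketch in Python with pandas; the file names, columns, and formats are made up for illustration:

```python
import pandas as pd

# Extract: incremental extraction, i.e. only rows changed since the last run.
last_run = pd.Timestamp("2014-01-01")
sales = pd.read_csv("sales_source.csv",
                    parse_dates=["updated_at", "date_of_birth"])
delta = sales[sales["updated_at"] > last_run].copy()

# Transform: the kinds of clean-ups listed above.
delta["gender"] = delta["gender"].map({"M": "Male", "F": "Female"})  # decode fields
delta["age"] = (pd.Timestamp.now() - delta["date_of_birth"]).dt.days // 365  # derived value
delta[["first_name", "last_name"]] = delta["full_name"].str.split(" ", n=1, expand=True)  # split field
delta["height_m"] = delta["height_ft"] * 0.3048  # unit of measurement conversion
delta["sale_date"] = pd.to_datetime(delta["sale_date"], format="%m-%d-%y")  # date format revision

# Load: merge the delta into the warehouse table on the business key.
warehouse = pd.read_parquet("warehouse_sales.parquet")
merged = pd.concat([warehouse, delta]).drop_duplicates(subset="sale_id", keep="last")
merged.to_parquet("warehouse_sales.parquet")
```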

ETL vs ELT

Of late, a few organisations have been experimenting with ELT instead of ETL. The reasons cited are:

  • The entire data set is used as part of extraction and load, as opposed to select data sets in ETL. This increases the breadth of data available for analysis and accommodates changes in requirements
  • Existing hardware (which has already become commoditised and cheap) can be used, and no specific high-performance hardware is required

However, the ELT tools available these days are few, and organisations prefer the tried and tested route.

Business Intelligence and Data Mining

The logical next step after building a data warehouse is to leverage it to generate insights and glean knowledge out of the data. Business Intelligence uses specialised tools to allow analysts to run what-if scenarios, slice / dice data to look at new paradigms, etc. Data mining, on the other hand, is more of an automated process that looks for patterns inside the data. It is used frequently in fraud detection, knowledge discovery, etc.

A frequent example of data mining is the anecdote of “beer and diapers”. It is rumoured that in the 1980s, Walmart’s data mining system threw up a pattern showing young men buying beer and diapers together on Friday evenings. Walmart is said to have placed the two together, increasing sales of both. The veracity of this story is doubtful, but it has been oft quoted as an example of data mining.

In the end, Rajat quoted Andrew McAfee from one of his TED Talks:

Economies don’t run on energy, capital, or labour. They run on IDEAS!

BI and Data Mining tools running on our data warehouses ensure that we never run out of ideas and are able to leverage each inflexion point in the lifecycle of an organisation.

Session: Analytics in Business – Anurag Shrivastava

Analytics has been around for more than 30 years. BigData is becoming significant due to the emergence of new types of data like blogs, photos, videos, etc.

Information about the past is available via various kinds of reports. Real-time information is available via alerts through various mediums. Combining and extrapolating the two gives us information about the future.

However, what the above is missing is “insight”. Using advanced data models, we can gain insight into the what and why of past data. This helps us generate meaningful recommendations for the present and predict / prepare for the best / worst that can happen in the future.

An example of the above is the complex algorithm running behind Amazon’s recommendation engine. The same insight capability would also help a bank identify which of its loan seekers will pay back the loan and which will default.

Consider the application of analytics in some industries below:

Industry           | Applications
Financial Services | Fraud detection, Credit Scoring
Retail             | Promotions, Shelf Management, Demand Forecasting
Online             | Recommendations
Services           | Call Centre Staffing

Predictive Analysis and Data Mining

Predictive Analytics is a broad term describing a variety of statistical and analytical techniques used to develop models that predict future events or behaviours.

Data Mining is a component of predictive analytics that entails analysis of data to identify trends, patterns, or relationships in the data.

Consider the example of the insurance industry. Insurance companies typically prefer customers who would not file insurance claims. Even before granting insurance cover to a customer, an insurance company can calculate the probability of the customer filing claims. Based on that probability, the company can choose to cover or not cover the customer. Apart from data on the customer, external data is also used in such analytical models. For example, people living in mountainous regions or treacherous terrains would have a higher probability of filing claims.
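
A toy sketch of such a model (fabricated numbers, with scikit-learn’s logistic regression standing in for whatever an insurer would really use):

```python
from sklearn.linear_model import LogisticRegression

# Features per past customer: [age, prior claims, lives in treacherous terrain].
X = [
    [25, 0, 1], [40, 1, 0], [33, 0, 0], [52, 2, 1],
    [29, 0, 0], [61, 3, 1], [45, 1, 1], [38, 0, 0],
]
y = [1, 0, 0, 1, 0, 1, 1, 0]  # 1 = filed a claim

model = LogisticRegression().fit(X, y)

# Score a new applicant before deciding whether to grant cover.
p_claim = model.predict_proba([[30, 0, 1]])[0][1]
print(f"probability of filing a claim: {p_claim:.2f}")
```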

BigData and Data Science can also help companies with churn prediction, that is, predicting whether a person will stop patronising a company’s products or services. Input to such a modelling algorithm would be the customer’s behaviour in the months before his subscription ends, for example (a rough feature-building sketch follows the list):

  • Visits to price comparison sites
  • Calling the call centre a couple of times
  • Expressing dissatisfaction with the service
  • Questions asked on Facebook, Twitter, etc.
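
The sketch below (hypothetical event names, made up for illustration) shows how such behaviour could be turned into features for a classifier like the one above:

```python
from collections import Counter

def churn_features(events):
    """events: list of (event_type, details) seen before renewal."""
    counts = Counter(event for event, _ in events)
    return [
        counts["visited_price_comparison_site"],
        counts["called_call_centre"],
        counts["complained_about_service"],
        counts["asked_on_social_media"],
    ]

history = [
    ("visited_price_comparison_site", "site A"),
    ("called_call_centre", "billing query"),
    ("called_call_centre", "escalation"),
    ("complained_about_service", "slow support"),
]
print(churn_features(history))  # feature vector fed to the churn model
```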

Another use of BigData can be in Accident Prevention. Consider the table below.

Accident | Gender | Age    | Alcohol | Speed Limit | Fatal
1        | M      | Young  | Yes     | >= 100      | Yes
2        | M      | Young  | Yes     | 70 – 90     | Yes
3        | M      | Middle | No      | 70 – 90     | Yes
4        | F      | Young  | No      | <= 60       | Yes
5        | M      | Old    | No      | 70 – 90     | Yes

This is only a small subset of the data that can easily be culled from past accidents. If a smart algorithm is let loose on the entire data set, it can surface the scenarios, and estimate the probabilities, under which accidents turn fatal. Law enforcement agencies and traffic police can then take adequate measures to avert such accidents.

Similarly, banks and other financial institutions can analyse their existing data to create a model that can identify demographic and other attributes typical of loan defaulters thereby helping the institutions make better decisions when approving loan requests.

Finally, Anurag mentioned a good book on the importance of analytics: Analytics at Work by Tom Davenport. He highlighted an excerpt from the book that identifies the success factors (the DELTA model) for analytics to work in an organisation:

D for accessible, high-quality Data
E for an Enterprise orientation
L for analytical Leadership
T for strategic Targets
A for Analysts

Session: Question and Answer – Anurag Shrivastava and Narinder Kumar

Anurag: What is R programming language?

Narinder: R is a programming language suited primarily for statistical work and BigData. Other languages (like Java, C#, etc.) don’t suffice for this kind of work and don’t offer the same capability to handle the needs of BigData.

Anurag: What is machine learning? How does R language support it?

Narinder: Machine learning is one of the most important aspects of data science. The program learns by itself; for example, Google marking mails as spam, or recommendations by Amazon. The R language bridges the gap between machine learning and BigData. It helps the data scientist identify the right machine learning algorithm to use and then actually use it.

Anurag: How difficult is R to learn for a Java / C# programmer?

Narinder: Java and C# are both object-oriented languages, while R is a functional programming language. It has a learning curve and requires a certain mindset. R is a language used primarily by statisticians, while other languages are primarily meant for programmers. The best practices of R are not as widespread as those for other languages.

Anurag: Should one use purely R or work with hybrids?

Narinder: This is more of an operational decision. The idea is not to write thousands of lines of code to implement something; one should be able to arrive at a solution with minimal code. Python works well with R but has a different ecosystem in terms of APIs and support. R, Python, and Octave can all be used, but the most suited is R and the least suited is Octave.

Anurag: What is the difference between supervised learning and unsupervised learning?

Narinder: Supervised learning is telling the computer what to learn. For example, the variables that govern the price of a house (area, locality, age of the house) are fed into the system along with values for each. The program then extrapolates the price of a house given a new set of values.
Unsupervised learning is when the program learns on its own. For example, Google News looking at trending keywords in news and organising news based on the same.
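
A compact sketch of the two paradigms, using scikit-learn and toy numbers in the spirit of the examples above:

```python
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

# Supervised: known inputs (area in sq ft, age of house) paired with known prices.
X = [[800, 10], [1200, 5], [1500, 2], [2000, 8]]
prices = [100, 180, 240, 290]  # in thousands
reg = LinearRegression().fit(X, prices)
print(reg.predict([[1400, 4]]))  # extrapolate the price of an unseen house

# Unsupervised: no labels; the algorithm finds the groupings on its own, the
# way Google News clusters stories around trending keywords.
docs = [[5, 0], [4, 1], [0, 6], [1, 5]]  # e.g. keyword counts per article
print(KMeans(n_clusters=2, n_init=10).fit_predict(docs))  # cluster labels
```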

Anurag: What is Google Prediction API?

Narinder: I was a part of the beta testing of the Google Prediction API. Google lets you upload your data and provides you insights based on it. You can program your own variables and the output you are looking for, and Google lets you use their infrastructure for it. It is a kind of PaaS (platform as a service). Huge data sets can be ingested into Google BigStorage and then queried with Google BigQuery.

Anurag: What is the relationship between Hadoop and R?

Narinder: The algorithms for analytics and statistical analysis have existed since the 1980s. What is new is the large amount of data from different sources (social networks, videos, blogs, etc.) that did not exist earlier. Combine this data with traditional data and you have the data set that is fed into BigData analytics. R brings the intelligence of the algorithms, and Hadoop gives the capability to do intensive analysis on the large data set.

Detailing Backlog Appropriately

On one of the projects I am currently managing, I noticed that the product backlog was actually growing rather than shrinking as we progressed through sprints. At one point, there were close to a hundred and fifty stories in the backlog, all detailed and ready for planning; some were even accompanied by ready UI designs. The reason for the increasing backlog size was that everything under the sun was being thrown into it for future development. While this might sound fine (you would want to write down somewhere the features you might need in the future, and what better place than the backlog), something was definitely going wrong.

I realised that every time we planned a new sprint, instead of picking previously written stories from the backlog, we were writing new stories because the client had come up with a new feature that had priority over the rest of the backlog. This is completely understandable and even recommended. After all, you would like to use the feedback you are receiving from the market to add new features; if the window of opportunity for a feature is now, there is no point putting it on a backlog for later. But what about all the stories (and the effort invested in detailing them and designing the UI for them) already in the backlog? Soon they would become obsolete. They would never see the light of day, and if their turn ever came, they would require changes (in both the functionality and the UI expected) because the current functionality would have changed by then. In terms of Lean thinking, this was clearly muda (“waste” in Japanese). I could have done better things with the time I invested in detailing those stories and helping create their UI designs.

The other day, I was watching a show on some television channel (I think it was Comedy Central) and I noticed the way they presented their schedule:

  • Now (the show currently being telecast)
  • Later (the show after the current one)
  • Never (the show after the “later” one)

This was a fun way to present a relative schedule, especially the “never” part. Considering that I am always in a “meta” state of mind, looking at things above their current context and trying to correlate aspects from different contexts, the schedule format struck me as a solution to my backlog problem.

While the stories in the current and immediately next sprint (the “Now” part of the backlog) would be detailed enough, the stories for the next two to three sprints (the “Later” part) would be relatively coarse-grained. Stories even further down (the “Never” part) would have no detail whatsoever; they could be as simple as a single-line, five-word statement (similar to epics in Scrum parlance). This would help the backlog stay current and sharp, help me focus my time on more important tasks, and reduce waste. Of course, we would continue to add to the backlog any item we “might” need in the future, but it would not be as detailed as before; it would simply be a reminder, like a to-do.
