The Culture Code – The Secrets of Highly Successful Groups by Daniel Coyle

I want to help you grow your mindset and share my passion for impact. In this blog series I have therefore hand-picked bestselling publications and essential managerial tools, enabling you to renew your business and personal life sustainably. The goal of the first season is to build a common body of knowledge and a starting platform for you. By reading further you will:

  • save your scarce reading time on renewal, culture and the best performing teams
  • extend your leadership toolbox to support your business decisions
  • build your personal growth-mindset, required to excel as an evolutionary leader

Common ground

In this episode, our focus is on the extensive practical research on the best performing groups done by Daniel Coyle.

  • You get an overview of the common factors and themes of how the best performing groups operate, what makes those groups tick and how team cohesion is created.
  • You get insights on what are those verbal and physical cues of safety, vulnerability and purpose that keep these groups performing and co-operating extremely well.
  • In short, you learn what makes the best performing groups in any industry, at any time.

Any culture is always a group phenomenon, as Edgar Schein's lifetime research, covered in the first episode of this series, reflected. The building blocks of an organizational culture are its espoused values and daily behaviors. Therefore, no organizational culture change program should be undertaken unless a real, clearly defined performance development challenge or problem of a group exists. Otherwise more harm than good is done throughout the organization, and that harm is very difficult to correct later.
Coyle's recent research was performed in the fields of education, entertainment, the military, sports, and even crime. This cross-industry organizational research pinpointed best practices of team behavior within the Pixar and Google design teams, the US Navy SEALs and the San Antonio Spurs NBA basketball team. Let's dig deeper into the verbal and physical cues that keep these groups performing and co-operating extremely well.

Building Safety

How do you build psychological safety in a group? According to Coyle, group chemistry doesn't happen by chance. As a leader you need to focus more on your listening skills and body language in different interpersonal situations. As you might have heard before: if you want to succeed, use your means of communication (eyes, ears and mouth) in the same ratio in which you have been given them. Think about your leadership communication: do you speak more than you listen to your team and colleagues?
Another way to make fellow members feel safer in a group is to show transparency by being approachable, treating others warmly and encouraging people to participate. As MIT psychologists have tested and Google has evidenced in real life, working without status or seniority barriers brings people closer to each other. The outcome has been more innovative ideas, brought to the market faster.
Thus, for a sense of belonging to a group there must be safety, some type of connection established, and an expected future shown. The book gives a great example of such an environment, created by Gregg Popovich, head coach of the San Antonio Spurs NBA basketball team. He has been famous for being extremely demanding on the court, but very caring, thoughtful and warm off it. He went out of his way to show caring towards his team of coaches and players, both in moments of joy and of hardship. His deep mutual respect towards his team resulted in high motivation and consecutive successes as a unified, coherent professional basketball team.

Tools for growth-minded leaders

What & why?

  • Group chemistry builds powerful connection
  • Safety and closeness allow more innovation, faster
  • Presence of safety strengthens belonging


How?
  • More listening, less talking
  • Showing transparent leadership
  • Being approachable and thankful

Sharing Vulnerability

Historically, a leader's role in organizations has been that of the authority who knows everything and makes no mistakes. This is quite different from the new expected role of leadership: being vulnerable. Vulnerability in a business leadership context means being able to admit and accept one's own weaknesses, as well as to ask for help whenever needed. This does not happen when there is no trust towards every single member of the group.
Developing trust within a group means opening up individual insecurities and weaknesses to the entire group. Many recent studies have shown that for a group to perform at its best, trusted relationships need to be present. In practice this means that as a member of a group you must be able to put your own well-being and priorities after the group's success. You need to build a habit of developing your courage and candor. Be authentic in speaking the truth out loud, and be able to listen objectively to find solutions together. Genuinely caring about and showing empathy towards your group members are key competencies of a leadership growth journey, expressed in words of 'we' and 'us' rather than 'me' and 'I'.

Tools for growth-minded leaders

What & why?

  • Showing weaknesses leads to increased co-operation
  • Calmness helps in coping with stress and pressure
  • The vulnerability loop, where insecurities are tackled together, sets trust in motion within a group


How?
  • Sharing mutual weaknesses as a group; it's the leader's responsibility to start
  • Putting the group´s well-being over personal needs and wants
  • Developing a habit of helping others

Establishing Purpose

Purpose is the common noble cause towards which the best performing groups are heading while helping each other. Often this intent is expressed in credos: short, action- and future-oriented taglines. A credo shows everyone's purpose in the organization, the common shared identity and what success will look like. It promotes direction and togetherness.
To achieve a group's purpose, proficiency and creativity need to be present simultaneously to drive the group further. Every group member must be reminded of their sense of belonging often, through a multitude of communication means, both individually and as a group. Ranking priorities helps to clarify focus. Acceptance of and readiness to fail speeds up innovation and results.
In short, for the team to perform at its highest level, there needs to be mutual respect, trust, transparency, mutual support and internal motivation for continuous learning.

Tools for growth-minded leaders

What & why?

  • Credos describe everyone´s purpose within the group
  • Common identity and goal
  • Empathy towards others comes before skills


How?
  • Sharing signals of mutual support, motivation and connectedness, often
  • Ranking business priorities in a group
  • Giving a sense of direction with readiness to fail

Secrets of highly successful groups

  1. Relationships > prioritizing harmony to build up a strong foundation and safety
  2. Authenticity > showing vulnerability creates a platform for ultimate performance
  3. Purpose > building identity by clarifying individuals’ purpose and key tasks
  4. Parallel focus > proficiency (= same quality all the time) and creativity (new things from scratch)
  5. Catchphrases & Credos > though cliché, important for common direction and sense of belonging
  6. Transparency > in information, leadership, weaknesses and mistakes
  7. Retrospectives > learning and growth approach for better results

Key question for you to ask yourself when becoming a leader of high performing groups

  • How well are you prepared to express safety, vulnerability and purpose in public?


The next blog will be about building cultures of freedom and responsibility. Keep following.
Jere Talonen – Your co-pilot helping you to bridge the gap between strategy, values and behaviours from the boardroom to the shop floor by combining EX with CX. In this blog series, he shares his learnings from a multi-industry international career extending over 20 years as a leader, entrepreneur, business coach & consultant, as well as an executive team and board member. Sharing is caring. Currently, Jere acts as Principal Consultant – Recoding Culture and the Future of Work at Gofore Plc.

Jere Talonen


Jere works at Gofore as a lead and service culture development consultant. He has over 20 years of management-level business experience from global consumer brands in nine countries on three continents. Jere is also a seasoned entrepreneur in start-up ecosystem and network building.


Do you know a perfect match? Sharing is caring

The role of organisations has been under heavy discussion in recent years. More and more organisations are choosing a new approach to management control: self-management. This blog post looks at how one Finnish service company has developed its way of finding a balance between employees' autonomy and accountability.
Hello networked organisation
Organisations are designed to be stable and predictable environments. They have been very good at processing information, optimising processes and producing outputs. But over time, things change: customers want different services, new competitors and business models arrive, and the organisation might scale rapidly. This greater overall complexity forces organisations to fundamentally rethink their whole organisation model.
In the networked organisation model, the organisation operates as a network of small, self-directed pods that are connected by a common purpose and supported by a platform. A platform is a structure that increases the effectiveness of a community. The networked organisation cannot fit on a traditional organisation chart and is optimised by information speed and people pods.
The networked organisation enables a whole new level of flexibility and adaptiveness that would never be possible in a divisional organisation. It can respond dynamically to change and can learn and adapt to its environment continuously. This helps the organisation to identify and capitalise on opportunities faster.
Networked organisations are also very resilient. The model distributes the workload across a wider area by allowing each pod to focus on goals rather than on steps or stages. If one connection breaks, pods can still continue to work.
Networked organisation in action
Gofore Plc has around 600 employees and provides consultancy services in the fields of software development, design, management consulting and cloud. Gofore wanted to keep its organisation as simple as possible even though its growth has been rapid. The next sections explain briefly how the networked organisation model functions at Gofore.
Gofore’s business model is consulting, so the company is eager to find new customers and deals. In this typical example, a sales person discovers an interesting invitation for tender. He contacts another sales person to discuss the details. After discussion they decide to create a bid.

The invitation to tender requires proof of concept and a team of three developers. Sales person A discusses with the sales person B and uses Gofore’s internal services to find a suitable UX-designer and a software developer for the project. Sales person B leaves the pod.

The UX-designer and software developer A start designing proof of concept. Software developer A invites two more developers B and C who would be right for the project. Sales person A also invites a legal advisor to help prepare the bid.

Software developers A, B and C fill needed resumes and help the UX-designer to finish proof of concept. The legal advisor advises software developer B on details of her resume. Sales person A and the legal advisor finalise the bid. Sales person A sends the bid to the client. Finally, the pod disappears, and people return to other pods.

A notable thing in this example is that people might not have met before. There were also no managers or a standard process of how to proceed. The pod shares the common goal to “finish the bid”. In other words, the whole pod is accountable for doing all the needed actions in order to reach the goal.
Most of the communication happens in one Slack channel and the pod might be active for only a couple of weeks. The pod goal can be everything from a small marketing event to a large strategic acquisition and it can contain employees, partners and customers. Size and activeness of the pod also varies over time. Sometimes the pod has one facilitator and sometimes multiple members are the driving forces. Occasionally, a pod fails to reach its target. Then people from the pod sometimes gather to reflect on what went wrong.
Theoretically, people can jump into different pods and take different roles at any time. On the practical level, people have varying expertise and responsibilities that restrict mobility. People who have more sales or recruitment responsibilities, for example, tend to be more active than an expert focusing on a single customer project. There are also more static structures at Gofore, such as the executive management team and the human resources function. Despite this, most "goforeans" are members of multiple pods simultaneously.
Side effects
Every model has side effects and the networked system is no exception. Gofore has numerous pods active every day. This might cause a situation where two pods are working on the same topic without knowing the others’ plans. Thus, a pod might be operating on an activity that has previously been done, or is already planned to be done. In my experience, this sub-optimisation risk hasn’t been a major challenge at Gofore so far.
If an employee belongs to too many pods at the same time, context switching might generate overhead and frustration. Another challenge is that people become bottlenecks: people struggle to say no to new pods and activities, even when their schedules are fully booked. For these reasons, employees need to know how a self-organising organisation works. Gofore has created internal "People Person" and "Coach" services that support employees in their self-management and personal development skills.
Some people might also miss long-term teamwork and traditional supervisors, under whom decision making is slower and more predictable. Gofore's customer projects are typically long-lasting, which helps to create a more stable environment. Some pods, such as "Guilds" and "Capabilities", are also more durable by nature at Gofore.
Control your own fate
This blog post offers a glimpse into Gofore's operational level. It is important to understand that when an organisation is adaptive and learning, its structures and culture also evolve continuously. Hopefully organisations understand this paradigm change and let go of the legacy of Taylorism.

The Connected Company by Dave Gray
Graphic Design
Miia Ylinen


Juhana Huotarinen

Juhana Huotarinen is a lead consultant of software development at Gofore. Juhana’s background is in software engineering and lately, he has taken part in some of the biggest digitalisation endeavours in Finland. His blogs focus on current topics, involving agile transformation, software megatrends, and work culture. Juhana follows the ‘every business is a software business’ motto.


Fighting Document Chaos
Raise your hand if your software project's documentation needs updating. If you are working on a normal software project, you probably raised your hand. Typically, project documentation is outdated, contains useless information, and is scattered around. This blog post reveals the reasons for this 'documentation chaos' and how you can prevent it.

Reasons for The Chaos

Before we jump into the details, it's good to understand what 'documentation' means. Documentation is the information that describes the product to its users. This blog post focuses only on maintained documentation, so user stories, whiteboard drawings, PowerPoint presentations and UX sketches are ignored.
The next question is why software projects face documentation chaos. One reason is that, while only programmers see the code, documentation is visible to everybody. Therefore, documentation can be seen as a symbol of the quality of the software. Some projects' boundary conditions also require high documentation standards.
Another reason is the project lifecycle. The development team gathers and documents random information when a new project starts. When deadlines approach and budgets run out, the team concentrates on developing software and fixing bugs, and stops maintaining the documentation. In my experience, this is a very typical situation in software projects.
Universities have a very theoretical approach to software development. Courses are full of rules and regulations about how you should write documentation. In addition, courses, for example, determine at a very detailed level what documentation is needed in different phases and reference groups. This approach gives a misguided picture to students and reflects on the whole software development community.
Sometimes documentation chaos is caused by the software project structure. If the project has only a few skilled developers but many consultants and managers, documentation becomes an absolute value. The result is a massive set of documentation without working software. While documentation is visible to all, documentation costs are not.

But it’s just a piece of paper?

When the amount of documentation exceeds a certain level, documentation turns against itself. This is because documentation is valid only when it’s up to date. Documentation costs are easy to understand through this example.
Let's take a project which needs only 200 pages of documentation in total. According to these sources (link1, link2, link3), it takes 2–4 hours to write a text page. We are fast writers, so writing 200 pages of documentation takes us 400 hours. That's only 53 man-days, which is easily defensible.
Let's say the project's lifespan is two years. According to the same sources, it takes 1 hour on average to revise a text page. If the documentation needs to be changed on average once a month, it takes a total of 200 man-days. This is more, but still manageable.
Now let's take a second project which needs 2000 pages of documentation. Usually this means that the product's business rules are part of the documentation. On this occasion, the numbers are 530 and 2000 man-days. That's more than 2 years of non-stop work for a team of five.
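The writing-cost arithmetic above can be sketched in a few lines of code. The 7.5-hour workday is my assumption, not a figure from the sources:

```javascript
// Rough sketch of the documentation cost model above.
// Assumption: one man-day is 7.5 working hours.
const HOURS_PER_DAY = 7.5;

const writingManDays = (pages, hoursPerPage) =>
  Math.round((pages * hoursPerPage) / HOURS_PER_DAY);

console.log(writingManDays(200, 2));   // first project: ~53 man-days
console.log(writingManDays(2000, 2));  // second project: ~533 man-days
```

Maintenance costs scale the same way: ten times the pages means roughly ten times the revision effort over the project's lifespan.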

Less Documentation, Less Worries

‘If I had asked people what they wanted, they would have said faster horses’ is a famous quote by Henry Ford. Product owners and other decision makers want the top product which is fully documented. Unfortunately, documentation is expensive as the previous example showed.
The best way to prevent documentation chaos is to focus on quality over quantity. It’s better to have one page up to date than ten pages outdated. Here are some practical tips to keep documentation chaos under control:

  • Make documentation costs visible. New documentation is a new user story and will be prioritised against other stories
  • Don’t create documentation without the team commitment that documentation must be kept up to date
  • Keep documentation close to the source code. Comments, dynamic guides and scripts beat Word documents and wiki-pages
  • Document every business rule and screen only if someone 'holds a gun to your head'
  • Most of the formal UML-models are too heavyweight for today’s software development
  • The more detailed the documentation, the greater the amount, and the bigger the risk that documentation is outdated
  • Create a culture where most design, such as sketches, drawings, and prototypes, will be disposable
  • Use images and avoid long text. Humankind has painted caves ten times longer than it has written texts
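As a small sketch of the "keep documentation close to the source code" tip, a documented function beats a separate Word document: the business rule lives right next to the code that implements it. The function and the rule here are invented for illustration:

```javascript
/**
 * Calculates the total payable for an invoice.
 * Business rule: VAT is added on top of the net sum.
 * @param {number} netSum  invoice total before tax
 * @param {number} vatRate VAT as a fraction, e.g. 0.25 for 25 %
 * @returns {number} total payable including VAT
 */
function invoiceTotal(netSum, vatRate) {
  return netSum * (1 + vatRate);
}
```

When the rule changes, the comment and the code are in the same diff, so the documentation is far more likely to stay up to date.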

Self-caused, Self-fixed 

Integration problems, complicated customers, and wrong technologies: many typical problems come, at least partly, from outside. In contrast, documentation chaos is self-caused and self-perpetuated. The positive side is that the project team has the power to stop the chaos.
My next blog post will share more concrete examples of how to improve documentation and what is my ‘Minimum Viable Documentation’.

Graphic Design

Ville Takala



Juhana Huotarinen



This is part 2 of a blog series on Angular 2 change detection; see the first blog post for details.
As with other approaches, change detection in Angular 2 revolves around solving two main problems: how does the framework notice changes, and how are the actual changes identified? This dive into Angular 2 change detection is divided along those two aspects. First we will see how Angular 2 uses the overridable nature of browser APIs, patching them with a library called Zone.js, to hook into all possible sources of changes. After that, in the second part, we will go through what happens when a possible change is detected. This covers the actual identification of changes and the process of updating them to the DOM based on the bindings the developer has defined in the template.

Who Notifies Angular 2 of Changes?

So the first problem that needs to be solved is: what notifies Angular 2 of changes that may have happened? To answer this we must first explore the asynchronous nature of JavaScript and see what can actually cause a change in JavaScript after the initial state is set. Let's start by taking a look at how JavaScript actually works.

Asynchronous Nature of JavaScript

JavaScript is said to be an asynchronous, yet single-threaded language. These are of course just fancy technical terms, but understanding them forms the foundation for seeing how change can happen. Let's start with the basic, synchronous flow that many programmers coming from other languages are familiar with. This flow is called imperative programming. In imperative programming each command is executed one after another, in order. Each command is executed completely before proceeding to the next one. Let's take an example program in JavaScript:

const myNumber = 100;
doSomething(myNumber);
doSomethingElse();

This is just a nonsense program that demonstrates the fact that each command is executed synchronously, one by one. First we assign a variable, then call some function, and after that function returns, call the last one. This is the basic way many programming languages work, and it is also true for JavaScript. But there is one major thing to notice about JavaScript: it is also called reactive. This means that things can happen asynchronously. There can be different kinds of events, and we can subscribe to each of these events and execute some code when they occur.
Let's take the most basic example of asynchronous behavior in JavaScript: the setTimeout browser API. What does setTimeout do, then? As seen from the signature – setTimeout(callback, timeout) – the function takes two parameters. The first is a so-called callback function that is executed once a certain amount of time has elapsed. The second parameter is the timeout in milliseconds. Let's take an example of how setTimeout can be used:

setTimeout(function () {
  doSomething();
}, 1000);

So what we have here is a basic imperative, synchronous call to a function called setTimeout. This function call is executed when this piece of code runs (for example when the browser has loaded the script file and executes it). What it does is schedule the callback function passed to it to be called at a later time. This scheduling for later execution is what we mean by asynchronous execution. The function containing the call to doSomething is executed when one second (1000 milliseconds) has elapsed and the call stack is empty. Why is that last condition important? Let's take a look at it in more detail.
The call stack is the same call stack we have in other languages, such as Java. Its purpose is to keep track of the nested function calls that occur during execution of the program. Like many other languages, JavaScript is single-threaded, meaning that only one piece of code can be executed at a time. The main difference is that, unlike in many other languages, in JavaScript code can also get executed after the actual synchronous code has already run to completion. In Java we would just enter the main method when the program starts, execute the code within it, and when there is no more code to be executed, the program exits. This isn't the case in JavaScript, where we can schedule code to be executed later with the browser APIs. setTimeout is one of these APIs, but there are many more. We can, for example, add event listeners with addEventListener. There are multiple types of events we can subscribe to; the most common ones relate to user interaction, such as mouse clicks and keyboard input. As an example, click events can be subscribed to with the following code:

addEventListener('click', function () {
  doSomething();
});
To summarize the kinds of sources of asynchronous execution there are in JavaScript, we can divide the APIs into three categories:

  • Time-related APIs like setTimeout and setInterval
  • HTTP responses (XMLHttpRequest)
  • Event handlers registered with addEventListener

These are the sources of asynchronous execution of code, and here's the main thing to realize: they are the only potential sources of changes. So what if we could patch these browser APIs and track calls to them? As it turns out we can, and that is exactly what we will do. This brings us to the next subject: Zones.
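Before moving on, the patching idea itself can be made concrete with a minimal sketch. This is not Zone.js (which is far more thorough); the patchAsyncApi helper and the runLater stand-in API are invented for illustration:

```javascript
// Wrap an async-style API function so a hook fires after every callback.
function patchAsyncApi(target, name, onTaskDone) {
  const original = target[name];
  target[name] = function (callback, ...rest) {
    return original.call(target, function (...args) {
      const result = callback(...args);
      onTaskDone(name); // a task finished: a change may have happened
      return result;
    }, ...rest);
  };
}

// A stand-in for a real browser API like setTimeout:
const api = { runLater(callback) { callback(); } };

const log = [];
patchAsyncApi(api, 'runLater', (name) => log.push(name + ' task done'));
api.runLater(() => log.push('callback ran'));
// log: ['callback ran', 'runLater task done']
```

The same trick applied to setTimeout, XMLHttpRequest and event listeners is what lets a library observe every possible source of change.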


Zones

Zone.js is Angular's implementation of the concept of zones for JavaScript. Zones are originally a concept from the Dart programming language.
So how are these so-called zones used, then? Let's look at example code that simply runs a function inside a zone:

zone.run(() => {
  console.log('Hello world from zone!');
});
What we have here is just a simple function passed to the run method. This function will be executed inside the current zone. So what is the point of running the code inside a zone?
The magic comes from the possibility to hook into asynchronous events. Before we run any code inside our zone, we can add callbacks to be invoked once something interesting happens. One important example of these hooks is afterTask. The afterTask hook is executed whenever an asynchronous task has been executed. An asynchronous task simply means a callback registered with any of those browser APIs mentioned earlier, such as setTimeout. Let's have an example of how this works:

zone.fork({
  afterTask: () => console.log('Asynchronous task executed!')
}).run(() => {
  setTimeout(() => console.log('Hello world from zone!'), 1000);
});

// Console log:
// Hello world from zone!
// Asynchronous task executed!

There are actually quite a few of these hooks available; to name some, there are enqueueTask, dequeueTask, beforeTask and onError. There is, though, a reason we looked into afterTask especially: afterTask is the key piece we need to trigger change detection in Angular 2. There is still one twist in the story, and that is NgZone, which we'll have a look at next.


NgZone

As covered previously, we can use Zone.js to execute some code when an asynchronous task's callback has been executed. We could now trigger change detection each time any asynchronous callback has run. There is still a simple optimization possible, and that is handled by NgZone. NgZone is a class in the Angular 2 core module that extends the concept of zones a little further by keeping track of items in the asynchronous callback queue. It also defines a new hook called onMicrotaskEmpty, which does the following (from the documentation):

Notifies when there is no more microtasks enqueued in the current VM Turn. This is a hint for Angular to do change detection, which may enqueue more microtasks. For this reason this event can fire multiple times per VM Turn.

So it basically allows us to execute change detection only once there are no more asynchronous callbacks to be executed, instead of running change detection after each single task. Nice!
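The optimization can be sketched as a simple pending-task counter. This is a toy model, not NgZone's actual implementation, and all the names are invented:

```javascript
// Fire onMicrotaskEmpty only when the pending task count drops to zero.
function createTaskTracker(onMicrotaskEmpty) {
  let pending = 0;
  return {
    taskScheduled() { pending++; },
    taskFinished() {
      pending--;
      if (pending === 0) onMicrotaskEmpty();
    }
  };
}

const runs = [];
const tracker = createTaskTracker(() => runs.push('change detection'));
tracker.taskScheduled();
tracker.taskScheduled();
tracker.taskFinished(); // one task still pending: nothing happens
tracker.taskFinished(); // queue drained: change detection runs once
// runs: ['change detection']
```

Two tasks were executed, but "change detection" ran only once, when the queue drained.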
NgZone also has some other interesting functionality that we aren't going to go through here. For example, it allows you to run asynchronous code outside Angular's default zone so that it doesn't trigger change detection. This is especially useful when you have multiple asynchronous calls to be made sequentially and don't want to trigger change detection unnecessarily after each of them. This can be achieved with a method called runOutsideAngular, which takes the function to be executed as a parameter.

Zones & Angular 2

Now that we know what the concept of zones is and how zones can be used to track asynchronous execution, we can take a look at how Angular 2 actually triggers change detection. Let's have a look at example pseudo-code by Pascal Precht from his excellent article on this very same topic, Angular 2 Change Detection Explained:
this.zone.onMicrotaskEmpty
  .subscribe(() => this.zone.run(() => this.tick()));

tick() {
  this.changeDetectorRefs
    .forEach((ref) => ref.detectChanges());
}

As we see here, the API of NgZone is a little different from the zone.js hooks we showed, since it uses observables instead of plain registered callbacks, as is usual in Angular 2. Nevertheless, the concept is still the same: each time the microtask queue (the queue of asynchronous callbacks to be executed) is empty, we call a method named tick. And what tick does is iterate through all the change detectors in our application. Simple, yet effective. Next, let's take a look at what these change detectors are and how they are used to detect the changes made.

Change Happened, Now What?

Great! Now we know how Angular 2 learns about changes that may have occurred. What we need to do next is identify what the actual changes are and then render the changed parts to the user interface (the DOM). To detect changes we first need to think a little about the structure of Angular 2 applications.

Angular 2 Application Structure

As you surely know at this point (at least implicitly), every Angular 2 application is a tree of components. The tree starts from the root component, which is passed to the bootstrap method as its first parameter and is usually called AppComponent. This component then has child components, either through direct references in the template or via the router instantiating them within the <router-outlet></router-outlet> selector. Be that as it may, we can visualize the application as a tree structure:

We can now see that there’s the root node (AppComponent) and some subtrees beneath it symbolizing the component hierarchy of the application.

Unique Change Detector for Each Component

An important aspect of Angular 2 change detection is the fact that each component has its own change detector. These change detectors are customized for the data structures of each component to be highly efficient. As the image below shows, each component in our component tree has its own change detector.

So what makes these change detectors so unique, then? We won't go into details in this post, but each change detector is created especially for its component. This makes them extremely performant, as they can be built to be monomorphic. This is a JavaScript virtual machine optimization that you can read more about in Vyacheslav Egorov's in-depth article What's up with monomorphism?. This optimization lets Angular 2 change detectors run "hundreds of thousands of simple checks in a few milliseconds", according to Angular core team member Victor Savkin.
One important aspect remains to be discovered: by whom and when are the change detectors created? There are two possible ways. The first, and the default, choice is for Angular 2 to instantiate them automatically on application initialization. This adds some work to be done while bootstrapping the application. The second option is to use something called the offline compiler, still a work in progress by the Angular 2 core team, to generate the change detectors through a command-line interface already before shipping the application. The latter can obviously speed up application startup even further. To find out more about the offline compiler, see the angular2-template-generator npm package.

Change Detection Tree

Okay, now we know that each component has a unique change detector responsible for detecting the changes that have happened since the previous rendering. But how is the whole process of change detection orchestrated? By default, Angular 2 needs to be conservative about the possible changes that might have happened and check the whole tree every time. Possibilities to optimize this are shown later in this blog series.
Angular 2 always performs change detection from top to bottom. We start by triggering the change detector of our application's root component, and after that we iterate through the whole tree, starting from the second level. This is also illustrated in the image below.
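The top-down, single-pass order can be sketched as a depth-first walk from the root. Again, this is a plain TypeScript model (DetectorNode and runChangeDetection are our own illustrative names), not Angular's implementation:

```typescript
// Each node pairs a check function with its child nodes.
interface DetectorNode {
  detectChanges: () => void;
  children: DetectorNode[];
}

// Run the node's own detector first, then recurse into its children:
// every detector in the tree runs exactly once per pass, top to bottom.
function runChangeDetection(node: DetectorNode): void {
  node.detectChanges();
  for (const child of node.children) {
    runChangeDetection(child);
  }
}
```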

Doing change detection this way is predictable, performant, easy to debug and controllable. Let’s explore each of these terms a little further.
Angular 2 change detection is said to be predictable because there is no possibility of having to run change detection multiple times for one set of changes. This is a major difference compared to Angular.js, where there was no guarantee of whether change detection would be single- or multi-pass. How does Angular 2 prevent the need for multi-pass change detection? The key thing to realize is that data only flows from top to bottom. There can't be any cycles in the component tree, as all the data coming into a component can only arrive from its parent through the input mechanism (the @Input annotation). This is what is meant when we say that the structure of an Angular 2 application is always a unidirectional tree.
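Unidirectional flow means a parent pushes data into its child's inputs during the parent's own check, and the child never writes back upwards. Here is a hedged plain-TypeScript model of that idea (ParentModel, ChildModel and their fields are invented for illustration; in a real component the title field would carry the @Input annotation):

```typescript
class ChildModel {
  // In an Angular 2 component this field would be marked with @Input.
  title = "";
}

class ParentModel {
  heading = "Hello";
  child = new ChildModel();

  // During the parent's check, data flows downwards into the child.
  // The child holds no reference to the parent, so no cycle can form.
  detectChanges(): void {
    this.child.title = this.heading;
  }
}
```

Because every binding points downwards, a single top-to-bottom pass is guaranteed to see every change.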
Needing only a single pass, combined with the extremely fast per-component change detectors (the VM-friendliness discussed above), makes change detection extremely fast. Angular core team manager Brad Green stated in his ng-conf 2016 talk that, compared to Angular.js, Angular 2 is always five times faster at rendering. This is already way more than fast enough for most applications. Still, if there are performance corner cases, we can apply the optimization techniques shown later in this series to speed up change detection even further. These techniques include the usage of immutables and observables, or even totally manual control over when change detection is run.
If you have done Angular.js development, the odds are that you have run into some really obscure error messages, stating something like "10 $digest() iterations reached. Aborting!". These problems are often really hard to reason about and thus to debug. With the Angular 2 change detection system they simply can't happen, as running change detection is guaranteed not to trigger new passes.
Angular 2 change detection is also extremely controllable. We can choose between multiple change detection strategies (covered later in this series) for each component separately. We can also detach and re-attach change detection manually to gain even further control.
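Detaching can be modeled as a flag that the traversal honours: a detached subtree is simply skipped until it is re-attached. In real Angular 2 this is done through a component's ChangeDetectorRef; the DetectorRef class below is only an illustrative stand-in:

```typescript
class DetectorRef {
  private attached = true;
  checksRun = 0;

  // Mirrors ChangeDetectorRef.detach(): the tree walk skips this node.
  detach(): void {
    this.attached = false;
  }

  // Mirrors ChangeDetectorRef.reattach(): checks resume from here on.
  reattach(): void {
    this.attached = true;
  }

  // The check only runs while the detector is attached.
  detectChanges(): void {
    if (this.attached) {
      this.checksRun++;
    }
  }
}
```

This is what makes it possible to exclude a rarely changing subtree from every pass and bring it back only when its data actually changes.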


In this blog post we saw how the JavaScript event loop works and how it can be used together with the concept of zones to trigger change detection automatically on possible changes. We also looked into how Angular 2 manages to run change detection as a single pass over a unidirectional tree.


Roope Hakulinen

As a lead software developer Roope works as team lead & software architect in projects where failure is not an option. Currently Roope is leading a project for one of the world's largest furniture retailers.
