I was sure I'd fail. In my earlier attempts to cut down the minutes, hours and days I spend on social media, I had managed to avoid it for a while, but I ended up using it as much as before, or even more, once I allowed it back into my life. I needed another strategy.

More than anything, I wanted to understand what I was looking for in these technologies. You see, I didn't know. What I knew was that something wasn't quite right. That's when I discovered digital minimalism.

Hiding in plain sight

According to a 2011 study, 47% of the U.S. adult population was estimated to have suffered from maladaptive signs of at least one addictive disorder during the previous 12 months. There are several hooks available, social media being only one of them.

Adam Alter, a professor of psychology and marketing at New York University and author of Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked, describes each behavioural addiction as including one or more of the following elements:

  • Compelling goals that are just beyond reach
  • Irresistible and unpredictable positive feedback
  • Sense of incremental progress and improvement
  • Tasks that become slightly more difficult over time
  • Unresolved tensions that demand resolution
  • Strong social connections

According to Alter, any experience that a person compulsively returns to in the short term, even though it harms at least one aspect of their well-being in the long term, counts as a behavioural addiction. The damage can be a mixture of social, physical and financial harm. Compared to substance addictions, behavioural addictions are easier to hide, which can sustain the unhealthy situation for a long time.

Because behavioural addictions are common, it may be tempting to question the need for a diagnosis and to normalize the situation. However, we need the diagnosis to compare reality with what is normal and healthy. The big picture should scare the hell out of us, and then trigger us to action, including me as a designer. We at Gofore aim to develop ethically sustainable solutions. In the end, our values define what kind of impact we want to create in this world. What actually happens depends on the actions that stem from those values. Supporting sustainable technology use is one way of caring for humanity and taking responsibility.

What is digital minimalism?

Digital minimalism offers building blocks for sustainable technology use. By definition, it is a philosophy of technology use in which you focus your online time on a small number of carefully selected and optimized activities that strongly support things you value, and then happily miss out on everything else. Cal Newport, an associate professor of computer science at Georgetown University, describes the approach in detail in Digital Minimalism: Choosing a Focused Life in a Noisy World.

The idea behind digital minimalism is to increase one's awareness of optional technologies and to help in making deliberate decisions about what to use, for what purpose and how. It is about throwing a strategy at something we can't otherwise control, knowing that there are products that are addictive by design, taking more than we intend to give.

In order to adopt the lifestyle of a digital minimalist, Newport suggests a rapid digital declutter process:

  • Take a 30-day break from all optional technologies in your life.
  • During the break, look for activities and behaviours that you find meaningful.
  • After 30 days, reintroduce optional technologies into your life. For each technology, evaluate the value it serves. You should allow the technology back into your life only if it serves something you deeply value, is the best way to serve this value, and has a clearly defined role in your life, including when and how you use it.

There is an alternative to avoiding everything, however. It is possible to create predefined rules for selected technologies that apply during the declutter period. That would mean using a certain technology but changing something in the way you use it. If you're binge watching alone, for example, you could set an episode limit and ask a friend to join you.

But why all this trouble? The declutter period is there to help you clear your mind before rushing to conclusions about the value each technology serves. But avoiding technologies for some time isn't the hardest part. It is being honest with yourself that can be excruciating, and that happens at the very end of the declutter period, when you return to evaluate each technology. If a technology offers you only some value, you should let it go. At the same time, you are leaving behind that part of yourself, and farewells are always hard.

My experiment

My rules were simple. No Twitter, Instagram, Facebook or LinkedIn. I moved the app icons away from my phone's main screen so that I wouldn't touch them out of habit. As suggested by the book, I had planned activities for the moments when I'd normally reach for my phone. Nothing fancy there. I needed these activities the most during the first week, when I had to remind myself that I was in charge of the situation I had put myself into.

In the third week, a disturbing thought flickered across my mind. A sense of freedom, the kind you'd feel after an escape. But was I running from technology, or from myself as a user? Is there a difference?

I could tell you what I did with the time freed up by avoiding these technologies, but what I find far more interesting is the evaluation process that followed the declutter period. I got stuck at the very first question of the technology screening: does this technology directly support something that I deeply value? I simply didn't know what those values were. That part of the big picture was gone.

This was a fundamental moment. Like most people, I have a narrative for each application to rationalize my usage, but there was a mismatch between my goals and my behaviour that I couldn't explain away. What was most upsetting, however, was that I couldn't connect those goals to the values I care deeply about. The arrows pointed somewhere else. It didn't end there, of course. I knew better ways to reach those goals, too. As a final punch, when it comes to living by my values, I still have a long way to go.

After 30 days, I re-entered social media. After a while, I raised my gaze and made a decision. I'm still here, somewhere beyond, and for now, I'll stay.

Kati Virtanen

Kati Virtanen is a user experience designer who helps organisations get to know their customers and find out what really matters to them. One of the methods she uses is user testing in which the customer is actively involved.


A five-year-old Goforean

Exactly five years ago was my first working day at Gofore. Five-year-old children are said to be going through a calm phase in their development: they are independent, show initiative, are balanced and peaceful, and learn many new things. I feel that in this five-year employment I am living through a similar calm phase. I no longer struggle with self-direction. During my first year I had a lot to learn about both defining my own work and being accountable for it. Throughout these years I have been able to learn a great deal, and at the moment I learn something new every day, not least from our wonderful team. I have also gained a certain confidence in my work self: the courage to think out loud, to say no, to take the initiative, to experiment and to make mistakes. I cannot yet call myself calm, at least when it comes to my working pace, but that is exactly why it is my goal for 2020.
After one month of employment I wrote about my feelings on this blog. Back then I wrote, among other things: over the years I have learned to recognise the three things that matter most to me in my work: a good working atmosphere, trust and appreciation. Nothing has changed. Those three things still define a good workplace for me.

A good working atmosphere

Gofore has done well in competitions searching for the best workplace in Finland or in Europe, and not without reason. Our work culture is still good, even though I have seen it change considerably over five years. When I joined Gofore, we were a family-like company of about 100 people, where almost everyone knew each other and each other's skills. Very soon I felt like part of the 'old guard', as new employees started in droves every month. Now, five years later, we are almost 600 specialists in five different countries. At the level of the whole company, internationalisation has perhaps worn away some of that small, family-like community feeling, but at the same time our work family has gained wonderful new members, for example from Madrid. I still know the first 100 employees well, and in addition I make new acquaintances every week who enrich my working days.
As for the work culture, it is something where none of us can hide behind the company badge, pointing at others and wondering why it has changed over the years. Of course our culture is different now that we have a large number of offices, people from ever more diverse backgrounds, and we are still growing fast. Still, the atmosphere at our offices is good: I get to laugh at work every day, I see smiling faces, and people radiate well-being.
At our new, small offices, such as Turku, I see strong glimpses of what our Tampere office was like when I started at Gofore, and then I always know that the Gofore spirit is still alive and strong. It changes shape a little when there are over 200 people at an office and you can no longer know everyone or work with each of them. That is where, among other things, the Gerho groups step in: with the company's financial support, Goforeans interested in the same topic gather around it. Through the chocolate Gerho, for example, I have got to know Goforeans at our shared coffee breaks whose paths would never cross mine in work projects, and through it I have gained a community in which we are like a small work family, everyone's face covered in chocolate and laughter never far away.

Trust

One of the main reasons I ended up at Gofore was that I was hungry for more responsibility and wanted to show what I could do. I have always enjoyed development work, and at Gofore I have been able to do it in many senses.
For the first four years I focused on developing the operations of our Service Center, which provides maintenance support services. In addition, I was also involved in improving the comfort of the Tampere office and developing our company culture. This year I have focused on developing the business of our continuous maintenance services. The subject area has stayed the same, but I now look at it from a different angle than before. It has been wonderful to see how, over these years, our Service Center has grown into a significant function that has kept customers satisfied and still has potential for so much more.
Throughout all these years I have been able to prioritise my own work, decide where to direct most of my energy, sometimes make mistakes and learn, and often also succeed and enjoy the results of the work. None of this would be possible if we did not trust our employees' expertise.

Appreciation

A cat lives on thanks and a dog on a pat on the head, says an old proverb. Thanks alone do not put food on the table, but oh boy, what a difference it makes to an employee's motivation! It feels wonderful when the colleague sitting next to me praises my work. I know they know my work inside out and can therefore honestly judge whether I have succeeded in it or not. It feels even better when a success is noticed outside my own team. Then I know that the results of hard work are visible inside the organisation or, even better, outside it.
At Gofore we have a strong praise culture. Employees generously share thanks with each other on Slack, and not only there; good cheer and thanks are also shared in chance face-to-face encounters.
If I now went to the child health clinic for my five-year check-up, a statement about me would also be requested from my daycare providers. They would be asked about my skills and any possible problems, and their statement would be an expert opinion. I asked for such statements about myself, with this result: 'Jenna's ability to work with persistence is already quite good. She can concentrate on different tasks for at least 30 minutes. She is curious and social. She is happy to talk and to listen. She can usually get through conflict situations without extreme outbursts of emotion.'
This is a good place to continue from. I am grateful for the first five years and look forward to many more.

Jenna Salo

Jenna works as Continuous Services Lead and as a service manager. The starting point of Jenna's everyday work is to offer her customers peace of mind. Work culture is also close to her heart. In her free time Jenna is the humble servant of two chihuahuas, and American cars and circle skirts are guaranteed to get her excited.


The Culture Code – The Secrets of Highly Successful Groups by Daniel Coyle

I want to help you grow your mindset and share my passion for impact. In this blog series I have hand-picked bestselling publications and essential managerial tools that enable you to make a sustainable renewal in your business and personal life. The goal of the first season is to build a common body of knowledge and a starting platform for you. By reading further you will:

  • save your scarce reading time on renewal, culture and the best performing teams
  • extend your leadership toolbox to support your business decisions
  • build your personal growth-mindset, required to excel as an evolutionary leader

Common ground

In this episode, our focus is on the extensive practical research on the best performing groups done by Daniel Coyle.

  • You get an overview of the common factors and themes in how the best performing groups operate, what makes those groups tick and how team cohesion is created.
  • You get insights into the verbal and physical cues of safety, vulnerability and purpose that keep these groups performing and co-operating extremely well.
  • In short, you learn what makes the best performing groups in any industry, at any time.

Any culture is always a group phenomenon, as reflected in Edgar Schein's lifetime of research covered in the first episode of this series. The building blocks of an organizational culture are its espoused values and daily behaviors. Therefore, no organizational culture change program should be undertaken unless a real, clearly defined performance development challenge or problem exists in the group. Otherwise more harm than good is done throughout the organization, and that is very difficult to correct later.
Coyle's recent research was performed in the fields of education, entertainment, the military, sports, and even crime. This cross-industry organizational research pinpointed best practices of team behavior within the Pixar and Google design teams, the US Special Forces / SEALs and the San Antonio Spurs NBA basketball team. Let's dig deeper into those verbal and physical cues that keep these groups performing and co-operating extremely well.

Building Safety

How do you build psychological safety in a group? According to Coyle, group chemistry doesn't happen by chance. As a leader you need to focus more on your listening skills and body language in different interpersonal situations. As you might have heard before, if you want to succeed, use your means of communication (eyes, mouth and ears) in the same ratio as you have been given them. Think about your leadership communication – do you speak more than you listen to your team and colleagues?
Another way to make your fellow members feel safer in a group is to show transparency by being approachable, treating others warmly and encouraging people to participate. As MIT psychologists have tested and Google has evidenced in real life, working without status or seniority barriers brings people closer to each other, and the outcome has been more innovative ideas brought to market faster.
Thus, in order to feel a sense of belonging to a group, there must be safety, some type of connection established, and an expected future shown. The book gives a great example of such an environment created by Gregg Popovich, head coach of the San Antonio Spurs NBA basketball team. He has been famous for being extremely rigid on the court but very caring, thoughtful and warm off it. He went out of his way to find ways to show caring towards his team of coaches and players, both in moments of joy and of hardship. The mutual respect within his team resulted in high motivation and consecutive successes as a unified, coherent professional basketball team.

Tools for growth-minded leaders

What & why?

  • Group chemistry builds powerful connection
  • To be safe and close allows more innovation, and faster
  • Presence of safety strengthens belonging

How?

  • More listening, less talking
  • Showing transparent leadership
  • Being approachable and thankful

Sharing Vulnerability

Historically, a leader's role in organizations has been that of an authority who knows everything and makes no mistakes. This is quite different from the new expectation that leaders show vulnerability. In a business leadership context, vulnerability means being able to admit and accept one's own weaknesses, as well as to ask for help whenever needed. This cannot happen without trust in every single member of the group.
Developing trust within a group means opening up individual insecurities and weaknesses to the entire group. Many recent studies have shown that for a group to perform at its best, trusted relationships need to be present. In practice this means that as a member of a group you must be able to put your own well-being and priorities after the group's success. You need to make a habit of developing your courage and candor. Be authentic in speaking the truth out loud and be able to listen objectively to find solutions together. Genuinely caring and showing empathy towards your group members are key competencies of a leadership growth journey, expressed in words like 'we' and 'us' rather than 'me' and 'I'.

Tools for growth-minded leaders

What & why?

  • Showing weaknesses leads to increased co-operation
  • Calmness helps in coping with stress and pressure
  • A vulnerability loop, in which insecurities are tackled, sets trust in motion within a group

How?

  • Sharing mutual weaknesses as a group; it's the leader's responsibility to start
  • Putting the group's well-being over personal needs and wants
  • Developing a habit of helping others

Establishing Purpose

Purpose is the common noble cause towards which the best performing groups are heading while helping each other. Often this intent is expressed in credos, which are short, action- and future-oriented taglines. The credo shows everyone's purpose in the organization, the common shared identity and what success will look like. It promotes direction and togetherness.
To achieve a group's purpose, proficiency and creativity need to drive the group forward simultaneously. Every group member must be reminded often, through a multitude of communication means, both individually and as a group, of their sense of belonging. Ranking priorities helps to clarify focus. Acceptance of and readiness to fail speeds up innovation and results.
In short, for the team to perform at its highest level, there needs to be mutual respect, trust, transparency, mutual support and internal motivation for continuous learning.

Tools for growth-minded leaders

What & why?

  • Credos describe everyone's purpose within the group
  • Common identity and goal
  • Empathy towards others comes before skills

How?

  • Sharing signals of mutual support, motivation and connectedness, often
  • Ranking business priorities in a group
  • Giving a sense of direction with readiness to fail

Secrets of highly successful groups

  1. Relationships > prioritizing harmony to build up a strong foundation and safety
  2. Authenticity > showing vulnerability creates a platform for ultimate performance
  3. Purpose > building identity by clarifying individuals’ purpose and key tasks
  4. Parallel focus > proficiency (= same quality all the time) and creativity (new things from scratch)
  5. Catchphrases & Credos > though cliché, important for common direction and sense of belonging
  6. Transparency > in information, leadership, weaknesses and mistakes
  7. Retrospectives > learning and growth approach for better results

 
Key question for you to ask yourself when becoming a leader of high performing groups
 

  • How well are you prepared to express safety, vulnerability and purpose in public?


 
The next blog will be about building cultures of freedom and responsibility. Keep following.
 
About

Jere Talonen – Your co-pilot helping you to bridge the gap between strategy, values and behaviours from the boardroom to the shop floor by combining EX with CX. In the blog series, he shares his learnings from a multi-industry international career extending over 20 years as a leader, entrepreneur, business coach & consultant, as well as an executive team and board member. Sharing is caring. Currently, Jere acts as Principal Consultant – Recoding Culture and the Future of work at Gofore Plc.
 

Jere Talonen

Jere works at Gofore as a consultant in leadership and service culture development. He has over 20 years of executive-team-level business experience with global consumer brands, gained in nine countries and on three continents. He is also a startup entrepreneur experienced in building ecosystems and networks.


Digitalization is helping organizations and individuals build and expand their networks, which leads to meaningful cooperation. Increasingly, these networks share time, insights and information and co-create new business models and services. Business rules are in such constant change that regulators are struggling to keep up. To be resilient and stay relevant in this networked world, organizations need to constantly innovate new, meaningful ways to communicate, interact and form relations with different participants. This does not happen from inside the company.
Understanding the wider scope of systems, value streams and relationships, and how they work, is a key element in driving innovation in this networked world. Many organizations claim to be customer centric; however, if you ask their customers, the answer might be quite different. Customer surveys or 'Happy or Not' buttons at checkouts might give a quick impression of the organisation's concern, but this can be a false impression. Truly customer-centric organizations curiously explore their customers' holistic experiences in their world and changing contexts. To understand these, design methods such as observation techniques and contextual participatory methods are required.
The same can be said for understanding employees within the organisation. Employees know best what happens at the intersections with the external network participants they work with. Companies should never outsource their eyes and ears. Innovations do not flourish in an environment that does not listen to both their internal and external network participants.

Making the shift from company centric to customer and network-centric

Value in co-creation needs to be mutually beneficial, whether it is monetary, experiential, environmental or societal. Meaningful innovations require a radical mindset shift in organizations: from company centric to customer centric and all the way to network centric. To drive innovations that are meaningful to different participants, real network-centric organizations build their innovations around experiences. They try to understand people's activities, practices and experiences in their world and in a context that extends beyond the organisation's products and services. That is only possible by understanding individual behaviour, and that isn't easy.
People might be end-users, citizens, consumers, customers, employees, clients, partners or contributors and you need to observe them and listen to their stories, find out what is important to them in their world and in changing contexts, and find out why.

Are you company-centric or network-centric?

Using Design Thinking to facilitate constant change

Organizations that fearlessly withstand uncertainty and trust non-linear, iterative innovation processes driven by people-centric data have an advantage. The Design Thinking approach drives valuable innovations that are new to a specific context and time, creating value for all collaborative participants in a meaningful way. To be successful, innovations ultimately always need to be aligned with actual network participants' unsatisfied and important jobs, pains and gains. This means that if an organisation's innovation intent is not people driven but technology and business driven, those innovations need to be validated with evidence that people really care about the innovation intent.
The powerful mindsets of design thinking guide the whole organization to break down silos and build an open, transparent and trust-building atmosphere that supports collaboration and the sharing of information and knowledge. This helps to cultivate an innovation culture that embraces the experiences of employees and external network participants.
The world in which we are living and the future may seem foggy, but when you go out and observe the world with an open mind and with empathy, everything becomes clearer. The future does not just arrive – it is co-created within networks.



Do you believe in change? That you can change the world for the better for people and the environment? Take a look at our publication and our experts' views: Recoding change

Marjukka Rantala

Marjukka is an innovator and business designer who helps organisations build an innovation culture. She combines design thinking and business expertise into innovation processes tailored for each customer. Through co-creation and experimentation, different parties are engaged and meaningful services and ecosystems are built for the whole network, for business and ultimately for society.


In part 3 of my blog series on AngularJS migration, I go into fine detail on what code changes need to happen in preparation for the migration and how the actual migration is done.

Preparing your Application for Migration

Before beginning to migrate it’s necessary to prepare and align your AngularJS application with Angular. These preparation steps are all about making the code more decoupled, more maintainable, and better aligned with modern development tools.

The AngularJS Style Guide

Ensure that the current code base follows the AngularJS style guide: https://github.com/johnpapa/angular-styleguide/blob/master/a1/README.md. Angular takes the best parts of AngularJS and leaves behind the not-so-great parts. If you build your AngularJS application in a structured way using best practices, it will include the best parts and none of the bad parts, making migration much easier.
The key concepts of the style guide are:

  1. One component per file. Structuring components in this way will make them easier to find and easier to migrate one at a time.
  2. Use a 'folders by feature' structure so that different parts of the application are in their own folders and NgModules.
  3. Use Component Directives. In Angular, applications are built from components; the equivalent in AngularJS is a Component Directive with specific attributes set (see the sketch after this list), namely:
    • restrict: 'E'. Components are usually used as elements.
    • scope: {} – an isolate scope. In Angular, components are always isolated from their surroundings, and you should do this in AngularJS too.
    • bindToController: {}. Component inputs and outputs should be bound to the controller instead of using the $scope.
    • controller and controllerAs. Components have their own controllers.
    • template or templateUrl. Components have their own templates.
  4. Use a module loader like SystemJS or Webpack to import all of the components in the application rather than writing individual imports in <script> tags. This makes managing your components easier and also allows you to bundle up the application for deployment.
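As a reference point, here is a minimal sketch of a component directive that follows these conventions; the module, directive and template names are illustrative, not taken from any particular project.

// hero-detail.directive.js: an illustrative component directive following the style guide
angular
  .module('heroApp')
  .directive('heroDetail', function () {
    return {
      restrict: 'E',                    // used as an element: <hero-detail hero="..."></hero-detail>
      scope: {},                        // isolate scope
      bindToController: { hero: '<' },  // inputs bound to the controller, not to $scope
      controller: function () {
        var $ctrl = this;
        $ctrl.$onInit = function () {
          // initialisation logic goes here rather than in the constructor body
        };
      },
      controllerAs: '$ctrl',
      templateUrl: 'hero-detail.template.html'
    };
  });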

Migrating to Typescript

The style guide also suggests migrating to TypeScript before moving to Angular; however, this can also be done as you migrate each component. Information on the recommended approach can be found at https://angular.io/guide/upgrade#migrating-to-typescript, but my recommendation would be to leave any migration to TypeScript until you begin to migrate the AngularJS components.
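If you do decide to introduce TypeScript early, a common first step is to let the compiler accept the existing JavaScript files alongside new .ts files, so you can convert them gradually as you migrate each component. A minimal tsconfig.json sketch along those lines might look like this; the exact options depend on your build setup.

{
  "compilerOptions": {
    "target": "es5",
    "module": "es2015",
    "moduleResolution": "node",
    "allowJs": true,
    "checkJs": false,
    "outDir": "dist",
    "sourceMap": true
  },
  "include": ["src/**/*"]
}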

Hybrid Routers

Angular Router

Angular has a new router that replaces the one in AngularJS. The two routers can't be used at the same time, but the AngularJS router can serve Angular components while you do the migration.
In order to switch to the new built-in Angular router, you must first convert all your AngularJS components to Angular. Once this is done you can switch over to the Angular router even though the application is still hosted as an AngularJS application.
In order to bring in the Angular router, you need to create a new top-level component that has the <router-outlet></router-outlet> component in its template. The Angular.io upgrade guide has steps to take you through this process: https://angular.io/guide/upgrade#adding-the-angular-router-and-bootstrap
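That top-level component can be very small; a minimal sketch, with an illustrative selector name, looks like this.

// app.component.ts: a minimal root component hosting the Angular router
import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  template: '<router-outlet></router-outlet>'
})
export class AppComponent { }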

Angular-UI Router

UI-Router has a hybrid version that serves both AngularJS and Angular components. While migrating to Angular, this hybrid version needs to be used until all components and services are migrated; then the new UI-Router for Angular can be used instead.
To use the hybrid version you will first need to remove angular-ui-router (or @uirouter/angularjs) from the application's package.json and add @uirouter/angular-hybrid instead.
The next step is to add the ui.router.upgrade module to your AngularJS application's dependencies:
let ng1module = angular.module('myApp', ['ui.router', 'ui.router.upgrade']);
There are some specific bootstrapping requirements to initialise the hybrid UI-Router; step-by-step instructions are documented in the repository's wiki: https://github.com/ui-router/angular-hybrid

Implementation

Bootstrapping a Hybrid Application

In order to run AngularJS and Angular simultaneously, you need to bootstrap both versions manually. If you have automatically bootstrapped your AngularJS application using the ng-app directive then delete all references to it in the HTML template. If you are doing this in preparation for migration then manually bootstrap the AngularJS application using the angular.bootstrap function.
When bootstrapping a hybrid application you first need to bootstrap Angular and then use the UpgradeModule to bootstrap AngularJS. In order to do this, you need to create an Angular application to begin migrating to! There are a number of ways to do this; the official upgrade guide suggests using the Angular QuickStart project, but you could also use the Angular CLI. If you don't know anything about Angular versions 2 and above, now is the time to get familiar with the new framework you'll be migrating to.
Now you should have a manually bootstrapped AngularJS version and a non-bootstrapped Angular version of your application. The next step is to install the @angular/upgrade package so you can bootstrap both versions.
Run npm install @angular/upgrade --save. Create a new root module in your Angular application called app.module.ts and import the upgrade package.

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { UpgradeModule } from '@angular/upgrade/static';
@NgModule({
 imports: [
   BrowserModule,
   UpgradeModule
 ]
})
export class AppModule {
 constructor(private upgrade: UpgradeModule) { }
 ngDoBootstrap() {
   this.upgrade.bootstrap(document.body, ['angularJSapp'], { strictDi: true });
 }
}

This new app module is used to bootstrap the AngularJS application; replace 'angularJSapp' with the name of your AngularJS application.
Finally, update the Angular entry file (usually main.ts) to bootstrap the app.module we've just created.
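The entry file itself stays very small; a minimal sketch of it could look like this.

// main.ts: bootstraps the AppModule, which in turn bootstraps AngularJS in ngDoBootstrap()
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app.module';

platformBrowserDynamic().bootstrapModule(AppModule);
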
That’s it! You are now running a hybrid application. The next step is to begin converting your AngularJS Directives and Services to Angular versions. The Google walkthrough that these steps are based on can be found at https://angular.io/guide/upgrade#bootstrapping-hybrid-applications

Doing the Migration

Using Angular Components from AngularJS Code

If you are following the Horizontal Slicing method of migration mentioned earlier, then you will need to use newly migrated Angular components in the AngularJS version of the application. The following examples are adapted from the official upgrade documentation; for more detailed examples see https://angular.io/guide/upgrade#bootstrapping-hybrid-applications
AngularJS to Angular
Below is a simple Angular component:

import { Component } from '@angular/core';
@Component({
 selector: 'hero-detail',
 template: `
   <h2>Windstorm details!</h2>
   <div><label>id: </label>1</div>
 `
})
export class HeroDetailComponent { }

To use this in AngularJS you will first need to downgrade it using the downgradeComponent function in the upgrade package we imported earlier. This will create an AngularJS directive that can then be used in the AngularJS application.

import { HeroDetailComponent } from './hero-detail.component';
/* . . . */
import { downgradeComponent } from '@angular/upgrade/static';
angular.module('heroApp', [])
 .directive(
   'heroDetail',
   downgradeComponent({ component: HeroDetailComponent }) as angular.IDirectiveFactory
 );

The Angular component still needs to be added to the declarations in the AppModule. Because this component is being used from the AngularJS module and is an entry point into the Angular application, you must add it to the entryComponents for the NgModule.

import { HeroDetailComponent } from './hero-detail.component';
@NgModule({
 imports: [
   BrowserModule,
   UpgradeModule
 ],
 declarations: [
   HeroDetailComponent
 ],
 entryComponents: [
   HeroDetailComponent
 ]
})
export class AppModule {
 constructor(private upgrade: UpgradeModule) { }
 ngDoBootstrap() {
   this.upgrade.bootstrap(document.body, ['heroApp'], { strictDi: true });
 }
}

You can now use the heroDetail directive in any of the AngularJS templates.

Using AngularJS Component Directives from Angular Code

In most cases you will need to use Angular components in the AngularJS application; however, the reverse is still possible.
AngularJS to Angular
If your components follow the component directive style described in the AngularJS style guide then it’s possible to upgrade simple components. Take the following basic component directive:

export const heroDetail = {
 template: `
   <h2>Windstorm details!</h2>
   <div><label>id: </label>1</div>
 `,
 controller: function() {
 }
};

This component can be upgraded by modifying it to extend the UpgradeComponent.

import { Directive, ElementRef, Injector, SimpleChanges } from '@angular/core';
import { UpgradeComponent } from '@angular/upgrade/static';
@Directive({
 selector: 'hero-detail'
})
export class HeroDetailDirective extends UpgradeComponent {
 constructor(elementRef: ElementRef, injector: Injector) {
   super('heroDetail', elementRef, injector);
 }
}

Now you have an Angular component based on your AngularJS component directive that can be used in your Angular application. To include it simply add it to the declarations array in app.module.ts.

app.module.ts
@NgModule({
 imports: [
   BrowserModule,
   UpgradeModule
 ],
 declarations: [
   HeroDetailDirective,
/* . . . */
 ]
})
export class AppModule {
 constructor(private upgrade: UpgradeModule) { }
 ngDoBootstrap() {
   this.upgrade.bootstrap(document.body, ['heroApp'], { strictDi: true });
 }
}

Migrating your component directives and services should now be relatively straightforward; a detailed example of migrating the Angular Phone Catalogue application, which includes examples of transclusion, can be found at https://angular.io/guide/upgrade#bootstrapping-hybrid-applications
For the most part, if the AngularJS style guide has been followed, the change from component directives to components should simply be a syntax change, as no internal logic should need to change. That said, there are some services that are not available in Angular, and so alternatives need to be found. Below is a list of some common issues that I've experienced when migrating AngularJS projects.

Removing $rootScope

Since $rootScope is not available in Angular, all references to it must be removed from the application. Below are solutions to most scenarios of $rootScope being used:
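One very common scenario is an application-wide event bus built on $rootScope.$broadcast and $rootScope.$on. A typical replacement is a shared, injectable service wrapping an RxJS Subject; the sketch below is illustrative rather than a complete list of solutions, and the service and event names are hypothetical.

// event-bus.service.ts: an illustrative replacement for a $rootScope event bus
import { Injectable } from '@angular/core';
import { Observable, Subject } from 'rxjs';
import { filter, map } from 'rxjs/operators';

interface BusEvent {
  name: string;
  payload?: any;
}

@Injectable({ providedIn: 'root' })
export class EventBusService {
  private events = new Subject<BusEvent>();

  // Roughly equivalent to $rootScope.$broadcast('name', payload)
  emit(name: string, payload?: any): void {
    this.events.next({ name, payload });
  }

  // Roughly equivalent to $rootScope.$on('name', handler); subscribers get the payload
  on(name: string): Observable<any> {
    return this.events.pipe(
      filter(event => event.name === name),
      map(event => event.payload)
    );
  }
}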

Removing $compile

Like $rootScope, $compile is not available in Angular so all references to it must be removed from the application. Below are solutions to most scenarios of $compile being used:

  • The DomSanitizer service from '@angular/platform-browser' can be used to replace $compileProvider.aHrefSanitizationWhitelist
  • $compileProvider.preAssignBindingsEnabled(true) is now deprecated. Components requiring bindings to be available in the constructor should be rewritten to only require bindings to be available in $onInit()
  • Replace the need for $compile(element)($scope); by utilising the Dynamic Component Loader https://angular.io/guide/dynamic-component-loader (see the sketch after this list)
  • Components will need to be rewritten to remove $element.replaceWith()
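As an example of the third point, dynamic rendering in Angular goes through a ViewContainerRef instead of $compile. The sketch below is illustrative only; AdBannerComponent is a hypothetical component and, in pre-Ivy versions of Angular, it would also need to be listed in the module's entryComponents.

// ad-host.component.ts: an illustrative replacement for $compile(element)($scope)
import { Component, ComponentFactoryResolver, ViewContainerRef } from '@angular/core';
import { AdBannerComponent } from './ad-banner.component';

@Component({
  selector: 'app-ad-host',
  template: '' // the dynamically created component is attached next to this host's view
})
export class AdHostComponent {
  constructor(
    private viewContainerRef: ViewContainerRef,
    private resolver: ComponentFactoryResolver
  ) { }

  showBanner(): void {
    const factory = this.resolver.resolveComponentFactory(AdBannerComponent);
    this.viewContainerRef.clear();
    this.viewContainerRef.createComponent(factory); // renders AdBannerComponent dynamically
  }
}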

Conclusion

In this three-part blog series, we've covered the reasons for migrating, the current AngularJS landscape, migration tips and resources, methods for migration, preparing for a migration, different ways of using migrated components and common architectural changes.
The goal of this blog series was to give a comprehensive guide to anyone considering migrating from AngularJS to Angular, based on my experience. Hopefully we've achieved this, and if your problems haven't been addressed directly in the blog, the links have pointed you in the right direction. If you have any questions, please post them in the comments.
AngularJS migration is not an easy task, but it's not impossible! Good preparation and planning are key, and hopefully this blog series will help you on your way.

Sources

You can read part 1 of this series here: https://gofore.com/en/migrating-from-angularjs-part-1/
And you can read part 2 here: https://gofore.com/en/migrating-from-angularjs-part-2/

Rhys Jevons

Rhys is a Senior Software Architect with over 10 years of experience in Digital Transformation Projects in the Media, Transport and Industrial sectors. Rhys has a passion for software development and user experience and enjoys taking on complicated real-world problems.


In part 2 of my blog series on AngularJS migration, I’ll discuss the different methods for migrating an application and highlight the tools and resources that make it possible.

Tools and Resources

ngMigration Assistant

In August 2018, Elana Olson from the Angular Developer Relations team at Google announced the launch of the ngMigration-Assistant. When run, this command-line tool will analyse a code base and produce statistics on the code complexity, size and patterns used in an app. The ngMigration Assistant will then offer advice on a migration path and preparation steps to take before beginning the migration.
The goal of the ngMigration Assistant is to supply simple, clear, and constructive guidance on how to migrate an application. Here is some example output from the tool:

Complexity: 86 controllers, 57 AngularJS components, 438 JavaScript files, and 0 Typescript files.
  * App size: 151998 lines of code
  * File Count: 943 total files/folders, 691 relevant files/folders
  * AngularJS Patterns:  $rootScope, $compile, JavaScript,  .controller
Recommendation
Please follow these preparation steps in the files identified before migrating with ngUpgrade.
  * App contains $rootScope, please refactor rootScope into services.
  * App contains $compile, please rewrite compile to eliminate dynamic feature of templates.
  * App contains 438 JavaScript files that need to be converted to TypeScript.
      To learn more, visit https://angular.io/guide/upgrade#migrating-to-typescript
  * App contains 86 controllers that need to be converted to AngularJS components.
      To learn more, visit https://docs.angularjs.org/guide/component

The ngMigration Assistant tool is a great place to start when considering migrating an AngularJS project. The statistics and advice it gives will help quantify the effort the migration will take and can highlight particular patterns that will need to be addressed. Be warned that the tool doesn't cover everything; there will be additional areas of the application, external libraries and some logic for example, that will need reworking during migration. It's a good first step but not comprehensive.

ngMigration Forum

The ngMigration Forum gathers together resources, guides and tools for AngularJS migration. The forum allows developers to ask questions and get answers on their migration problems, and it also collates the common issues that occur during migration.

The angular.io Upgrade Guide

The angular.io Upgrade Guide contains a number of examples and walkthroughs on how to proceed with an AngularJS migration. Written by the Angular team, the guide addresses the most common cases and has a complete example of migrating the Phone Catalogue example application.

Deciding How to Migrate

There are three major approaches to migrating an AngularJS application to Angular.

Complete Rewrite in Angular

The first decision to make when considering migrating your AngularJS application is whether you will do it incrementally or not. If you need to support an existing application or the application is too large to fully migrate in a reasonable timeframe, then an incremental upgrade may be the only path open. However, if the application is small enough, or if you are able to stop supporting the existing application or allocate enough resources, then a complete rewrite is usually the most straightforward approach.
Migrate the whole application without supporting the AngularJS version:
Pros

  • You don’t have to worry about upgrading or downgrading components
  • No interoperability issues between AngularJS and Angular
  • Opportunity to refactor areas of the code
  • Can benefit from Angular features immediately

Cons

  • The application will be offline during the migration or you will need to copy the code base to a new repository
  • You don’t see the benefits until the whole application is migrated which could take some time depending on the overall size
  • Since you will not see the whole application running until the end of the migration you may discover issues as you build more features

Hybrid Applications

ngUpgrade

ngUpgrade is an Angular library that allows you to build a hybrid Angular application. The library can bootstrap an AngularJS application from an Angular application allowing you to mix AngularJS and Angular components inside the same application.
I will go into more detail on the ngUpgrade library in Part 3: Implementing the Migration but for now, it’s important to know that ngUpgrade allows you to upgrade AngularJS directives to run in Angular and downgrade Angular components to run in AngularJS.

Horizontal Slicing

When migrating using a Hybrid approach there are two methods that will gradually move your application from AngularJS to Angular. Each has its advantages and disadvantages which I’ll discuss next.
Horizontal Slicing is a term used to describe the method of migrating building block components first (low-level components like user inputs, date pickers etc) and then all components that use these components and so on until you have upgraded the entire component tree.
Image: migration routes (Victor Savkin)
The term references the way that components are migrated in slices cutting across the whole application.
Pros

  • The application can be upgraded without any downtime
  • Benefits are realised quickly as each component is migrated

Cons

  • It requires additional effort to upgrade and downgrade components

Vertical Slicing

Vertical Slicing describes the method of migrating each route or feature of the application at a time. Unlike horizontal slicing, views won't mix AngularJS and Angular components; instead, each view will consist entirely of components from one framework or the other. If services or components are shared across the application, they are duplicated for each version.
Image: vertical slicing (Victor Savkin)
Pros

  • The application can be upgraded while in production
  • Benefits are gained as each route is migrated
  • You don’t have to worry about compatibility between AngularJS and Angular components

Cons

  • It takes longer to migrate a route, so benefits aren't seen as quickly as with horizontal slicing
  • Components and services may need to be duplicated if required by AngularJS and Angular versions

Effort Needed to Migrate

Which method you adopt depends entirely on your business objectives and size of the application. In most cases, I’ve found that the hybrid approach is required and more often than not I’ve used vertical slicing during the migration. Maintaining a single working application at all times has always been a priority in my experience. Since the applications have also been very large the cleanest way to organise the migration across multiple teams has been to split the application up either by feature or by route and migrate each one in turn.
The amount of effort required again depends on your particular circumstances (size of the code base, number of people etc.). I've found that putting everyone to work on the migration at once leads to confusion and, in turn, wasted effort. What I've found is that by having a small team begin the work, bootstrap the hybrid application and produce some migrated components and services, the rest of the team spends less effort getting started and can instead begin scaling out the migration.

Part 3: Implementing the Migration

In part 3 I’ll go into fine detail on what code changes need to happen in preparation for the migration and how the actual migration is done.

Sources

You can read part 3 of this series here: https://gofore.com/en/migration-from-angularjs-part-3/
You can read part 1 of this series here: https://gofore.com/en/migrating-from-angularjs-part-1/

Rhys Jevons

Rhys is a Senior Software Architect with over 10 years of experience in Digital Transformation Projects in the Media, Transport and Industrial sectors. Rhys has a passion for software development and user experience and enjoys taking on complicated real-world problems.


Recently, for about a year and a half, I was working as a developer on a bleeding-edge, business-changing, disruptive solutions project. I cannot say much about the business or the customer itself, but I thought I would share some of my experiences of what we did and how.

Our team consisted of a Scrum Master, a UI/UX designer and full-stack developers, but the whole project had multiple teams working across the globe towards common goals using a Scaled Agile Framework (SAFe). Our team’s primary focus was to implement the web UI and the higher layers of the backend stack. We also contributed to the overall design and helped with coordination between all the product owners and different teams.

One of the best things in the project was learning and using a huge number of different bleeding-edge open source technologies.

Frontend

The key technologies for frontend development were React and Redux, in addition to the obvious HTML5, CSS3 and JavaScript ES6. With Redux, we used redux-saga for asynchronous side-effects and also some other supporting libraries such as redux-actions and reselect. CSS was written as part of the React components using styled-components. Building and bundling of the code was done using Webpack. We also had a great experience with Storybook as a means of supporting rapid development and easy documentation of UI components.
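To give a flavour of that stack, here is a small illustrative example of a React component styled with styled-components; the component and styles are made up for this post, not taken from the project.

// save-button.tsx: an illustrative React component using styled-components
import React from 'react';
import styled from 'styled-components';

// The CSS lives right next to the component it styles
const PrimaryButton = styled.button`
  padding: 0.5rem 1rem;
  border: none;
  border-radius: 4px;
  background: #0057b8;
  color: #ffffff;

  &:disabled {
    opacity: 0.5;
  }
`;

interface SaveButtonProps {
  onSave: () => void;
  disabled?: boolean;
}

export const SaveButton = ({ onSave, disabled }: SaveButtonProps) => (
  <PrimaryButton onClick={onSave} disabled={disabled}>
    Save
  </PrimaryButton>
);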

While microservices on the backend are becoming very common, this project also used micro-frontends. This approach is rarer, but the benefits are quite similar: different teams are able to work on different parts of the frontend independently since they are loosely coupled. New micro-frontends can also be written in different languages and using different technologies. This way, switching to a new technology does not require rewriting all the existing functionality. As the technology of choice for combining the micro-frontends, we started with Single-SPA, but later switched to an iframe-based approach. Using iframes made development and testing easier and improved our capabilities for continuous deployment.

This second solution turned out to work quite nicely. The only big challenge was related to showing full-screen components, such as modal dialogs. The iframe of a micro-frontend can only render content within itself. So, when it needed to open a modal dialog, it had to send a cross-window message to the top-level window, which then was able to do the actual rendering correctly on top of everything.
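A simplified sketch of that messaging pattern is below; the message shape, the origins and the renderModal helper are illustrative, not the project's actual protocol.

// Inside a micro-frontend iframe (assumed to be served from https://mf.example.com):
// ask the top-level shell to open a modal dialog on top of everything
window.parent.postMessage(
  { type: 'OPEN_MODAL', payload: { title: 'Confirm delete' } },
  'https://shell.example.com' // the shell's origin
);

// In the top-level shell window: listen for requests and render the modal
declare function renderModal(payload: unknown): void; // hypothetical helper in the shell

window.addEventListener('message', (event: MessageEvent) => {
  if (event.origin !== 'https://mf.example.com') {
    return; // only accept messages from known micro-frontend origins
  }
  if (event.data && event.data.type === 'OPEN_MODAL') {
    renderModal(event.data.payload); // the shell renders the dialog on top of everything
  }
});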

For frontend unit tests, we used Jest, Enzyme and Storybook snapshots, while end-to-end testing was done with TestCafe. Once again, it was seen that end-to-end tests are tricky to write and quite a burden to maintain. Thus, choosing their scope carefully to get the best cost-value ratio is important, no matter what tool is used. Nevertheless, we were quite happy with TestCafe compared to the available alternatives.

Backend

The backend of the system as a whole was very complex. The dozens of microservices in the lower layers were mostly done with reactive Java, and they utilized, for example, an event sourcing architecture. On top of those, our team built around 10 microservices with Node.js. The communication between services was mostly based on RESTful APIs, which our services implemented with either Express or Koa. In many cases, Apache Kafka was also used to pass Avro-serialized messages between services in a more asynchronous and robust manner. To provide real-time updates to the UI, we of course also used WebSocket connections. We learned that in some cases those Kafka-based messaging approaches can work very well. Still, there is definitely also a pitfall of over-engineering and over-complexity to be avoided.

In the persistence layer we started with CouchDB as a document database, but later on preferred using the PostgreSQL relational database in most of our cases. With the latter, we used Knex for database queries and versioned migrations, and Objection for object-relational mapping. For our use cases, we did not really need any of the benefits of a document database, especially since nowadays PostgreSQL also supports JSON data columns to provide flexibility to the standard relational data model when needed. On the other hand, the benefits of a relational database, such as better support for transactions and data migrations, were important for us.
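To illustrate how those two libraries fit together, here is a minimal sketch of a Knex connection with an Objection model on top; the table, columns and model are made up for this post.

// orders-repository.ts: an illustrative Knex + Objection setup
import Knex from 'knex';
import { Model } from 'objection';

const knex = Knex({
  client: 'pg',
  connection: process.env.DATABASE_URL
});

// Tell Objection which Knex instance to use for all queries
Model.knex(knex);

class Order extends Model {
  static get tableName() {
    return 'orders';
  }
}

// Example query: all orders of a customer, newest first
export async function findOrdersForCustomer(customerId: string) {
  return Order.query()
    .where('customer_id', customerId)
    .orderBy('created_at', 'desc');
}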

Some essential parts of the backend infrastructure were also Kong as the API gateway and Keycloak as the authorization server. Implementing complex authorization flows with OAuth 2.0, OpenID Connect and User-Managed Access (UMA 2.0) was one of our major tasks in the project. Another important architectural piece, which took most of our time in the latter stages of the project, was implementing support for the Open Service Broker specification.

In the backend, we used the Mocha framework for unit testing and preferred to write the assertions with Chai. Mocking other components and API responses was covered with Sinon and Nock. Overall, our backend stack was a success and, at least for me, a pleasure to work with.
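A typical unit test with that combination might look like the following sketch; the user service module and the API host are hypothetical.

// user-service.spec.ts: an illustrative Mocha + Chai + Nock test
import { expect } from 'chai';
import nock from 'nock';
import { fetchUserName } from './user-service'; // hypothetical module under test

describe('fetchUserName', () => {
  afterEach(() => {
    nock.cleanAll(); // remove any unused interceptors between tests
  });

  it('returns the name from the users API', async () => {
    // Intercept the outgoing HTTP call and return a canned response
    nock('http://users.internal')
      .get('/api/users/42')
      .reply(200, { id: 42, name: 'Ada' });

    const name = await fetchUserName(42);
    expect(name).to.equal('Ada');
  });
});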

DevOps

All the services in the project were containerized with Docker and for local development we used Docker-Compose. In production, the containers were running in OpenStack and orchestrated with Mesos and Marathon. Later on, we also started a journey in moving towards Kubernetes instead. For continuous integration and delivery, we used Gitlab CI/CD pipelines. I also liked our mandatory code reviews of every merge request. In addition to assuring the code quality, it was a very nice way to share knowledge and learn from others.

In a large-scale project such as this, carefully implemented monitoring and alerting systems are, of course, essential. Different metrics were gathered into Prometheus from all the services and exposed through Grafana, while all the logs were made available in Kibana. We also used Jaeger as an implementation of the OpenTracing API, which allowed us to easily trace how requests flowed between different services and what the origin of any errors was.

The main challenges were related to the fact that running such a huge project completely on a local workstation during development is impossible. We investigated a hybrid solution, where some of the services would run locally and some on a development cloud, but found no easy solution there. As the project and the number of micro-services continued to grow, we were getting close to a point where a better solution would have needed to be discovered. For the time being, we worked around the problem by just mocking some of the heavier low-level services and making sure our workstations had plenty of memory.

In summary, this was a fun and challenging project to work with. I’m sure everyone learned tons of new skills and gained a lot of confidence through this project. I want to send my biggest thanks to everyone involved!

 


Joosa Kurvinen

Joosa is an experienced fullstack software developer with a very broad skill set. He is always eager to learn and try out new things, regardless of whether that is with backend, frontend, devops or architecture. He has an agile mindset and always strives after clean and testable code. Joosa graduated from the University of Helsinki and wrote his master's thesis on AI and optimization algorithms.
