Six Weeks?!

Six weeks was the runway we had to rebuild a product that had taken ten years to develop, feature by feature, config by config. The custom hardware it ran on had reached end of life, and the business needed it running on Android hardware before the old platform went dark. There was no negotiating the deadline. There was only the work.

I tried to get ahead of it. I pushed to split the architecture handoff into two pieces: the networking library that everything else depended on first, and the UI and business logic second. That way part of the team could start two weeks early on the prerequisites while we waited for the rest of the spec. It was the right call, and it bought us some time.

Then the second dump arrived. Fifty pages of detailed technical specifications, with a refinement session scheduled for the following day. Not nearly enough time for anyone to build a mental model of what they were reading, let alone turn it into a coherent set of stories. We sat in front of a whiteboard with handfuls of sticky notes, trying to figure out what our story layout should even look like. We did our best. Maybe half the work got refined. With five weeks left and pressure mounting, we had to start building somewhere — refined or not. The team lead tried to finish refinement solo while the rest of the developers started chewing through the stories that were ready.

Go! Go! Go!


When the safety net got pulled

Before the first architecture dump even landed, a decision came down from above: cancel all unnecessary meetings. Concentrate. No distractions. That included 1-on-1s.

That was the decision that hurt us the most. My 1-on-1s aren’t status meetings. They’re the primary channel through which my team members tell me what they’re struggling with, where we can set goals and work toward them, and where we can pull things back before they become a crisis. Cancelling them at the very start of the most stressful project of the year didn’t help the team concentrate. It left them without a reliable way to escalate, and left me without the insight I needed to help.

All I had was the morning scrum. And by the time something surfaced there, the runway to fix it was already short.

So when problems started happening (and the incomplete refinement made sure they did), there was very little I could do about them in time. No working agreement meant no plan for pulling things back on track. Scope doubled. Last-minute requirements jumped the line. Must-fix bugs as far as the eye could see.

I stayed online with the team through weekends to keep them unblocked. I was up until 2am helping debug where I could. I negotiated with product to cut even one requirement so the rest could ship on time. We were working hard instead of working smart, and we all knew it.

The team made the deadline. Ninety-five percent complete, against all odds. They went straight into bug-fixing mode the following week, chasing must-fixes and as many nice-to-haves as they could reach. And my job became documenting everything that had gone wrong, so that it would never happen again.


What I wish we’d protected

The lesson wasn’t about balancing deadlines or managing technical complexity. It was simpler than that.

When a crisis hits, the instinct is to throw out everything that feels like overhead and just run. Cancel the meetings. Skip the process. Heads down, ship it. But the things that feel like overhead in a crisis are usually the things that were keeping the team functional. The 1-on-1s. The refinement sessions. The working agreements. The feedback loops. Strip those out and you don’t get a leaner, faster team. You get a team running in the dark, working harder instead of smarter, with no early warning system and no way to course-correct.

Honestly, I wish I’d pushed back harder on the 1-on-1 decision. I understood the pressure, and I tried to work within the constraints I was given. But those conversations were load-bearing, and losing them cost us more than the time they would have taken.

Protect your process.

Tickets Closed Isn’t the Same as Progress

When I took over the mobile team, we had just lost two directors in the space of a few months. What remained was a skeleton crew: one developer and one QA on Android, one developer and one QA on iOS. The workload ahead of us was substantial.

So at first, we hired whoever seemed capable of delivering, just to have extra pairs of hands. If someone had a track record of closing tickets, we brought them in and put them to work. It was the right call for the moment. But it created a problem I didn’t fully see until we were already living inside it.


Tickets closed isn’t the same as progress

Work was getting done. The backlog was moving. But bugs kept coming in, and nobody felt particularly responsible for them. The team had been hired to close tickets, so that’s what they measured themselves against. Code quality, maintainability, the long-term health of the codebase — those weren’t part of the deal they thought they’d signed up for.

I had to sit down with people individually and reframe the expectations. Every engineer on the team was personally responsible for the quality of code going into our repositories. That meant rigorous code reviews, for everyone’s code, including mine and the team leads’. Seniors were expected to model good code quality, but that didn’t exempt them from scrutiny. The newest hire had full freedom to flag a problem in a senior engineer’s pull request if it wasn’t up to standard.

Some people responded well to that. Others couldn’t make the shift, and we did have to let them go. That’s never an easy call, but a team where some people hold the standard and others don’t isn’t really a team. It’s just a set of individual contributors working in the same repository.


What we actually looked for

As we grew more deliberate about hiring, the technical bar stayed high but the questions got more specific. We looked for a firm grasp of the fundamentals: handling nullability, writing maintainable Kotlin, and spotting potential bugs in unfamiliar code. We’d give candidates some base code and ask them to extend it with additional parameters and optional logic, and watch how they approached it.
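
To give a feel for it, here is a sketch of the flavor of exercise we used. The domain and names are invented for illustration, not our actual interview code:

```kotlin
data class Order(val id: String, val discountCode: String?)

// The base code might format an order with no options:
//   fun formatOrder(order: Order): String = "Order ${order.id}"
//
// The ask: add an optional parameter and handle the nullable field
// without force-unwrapping it.
fun formatOrder(order: Order, includeDiscount: Boolean = false): String {
    val base = "Order ${order.id}"
    if (!includeDiscount) return base
    // ?.let keeps the null case explicit; reaching for !! here was a red flag.
    return order.discountCode?.let { "$base (discount: $it)" } ?: base
}

fun main() {
    println(formatOrder(Order("A-17", null), includeDiscount = true)) // Order A-17
}
```

What we watched for was less the final answer than the instincts: whether the candidate kept the null case explicit, and whether the extension preserved the existing behavior.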

The question I found most revealing was simple: what’s your favorite feature of the language you work in? Most experienced developers have an answer, even if they’ve never been asked to articulate it. But I wanted the ones who could talk through it thoughtfully, the ones who lit up when they told me why they reached for it over other options. It was a window into how they thought about the language itself, and whether they were genuinely engaged with their craft.

Culture fit mattered just as much. Would this person make the team better? Would they engage in code review as a collaborative exercise rather than a defensive one? Were they someone who took ownership, or someone who was waiting to be told what to do?

HR was not always thrilled with how long our process took. But rushing a hire to fill a seat was exactly how we’d gotten into trouble before.


What the team looked like at the end

We grew from four people to over twenty. Each hire was deliberate, chosen for both technical skill and how they’d fit into and strengthen the team dynamic. Code reviews became a genuine part of the culture, with junior engineers pushing back on senior ones and knowledge transferring in both directions.

The real difference between the team at the beginning and the team at the end was ownership. People cared about the codebase because they felt like it was theirs. They took pride in the reviews, they flagged problems early, and they held each other accountable in a way that no process document could manufacture.

That shift doesn’t happen by accident. It happens because you hire people who are capable of it, hold everyone to the same standard, and make clear from the start that quality is everyone’s responsibility across the whole team and the whole development process.

For us, it showed up in the numbers. The new, dedicated team cut our production defects in half year over year and reduced our bug escape rate by 40%. That’s what a team with genuine ownership looks like in practice.

KMP Is What Good Multiplatform Looks Like

When our mobile team started looking seriously at Kotlin Multiplatform, we had a people problem on our hands, not a technology one.

We had two native teams, iOS and Android, building the same business logic twice. The same data calls, the same validation rules, the same utilities, implemented separately, tested separately, and occasionally drifting apart in ways that were subtle enough to miss until a client noticed. Every time the requirements changed, we made the same update in two languages across two codebases just to keep parity.

KMP gave us a way out of that.


What we actually shared

We focused first on two areas where shared code could add consistency and predictability. The first was data processing: building shared logic to generate summary reports from repetitive data transformations. The second was network handling: a shared front-end on our APIs that kept retry logic and error handling consistent, regardless of which platform was making the call.
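
To give a feel for the network piece, here is a minimal sketch of the kind of shared retry logic that lives in a KMP common module. The names and the backoff policy are illustrative, not our production code, and it assumes kotlinx-coroutines:

```kotlin
import kotlinx.coroutines.CancellationException
import kotlinx.coroutines.delay

// commonMain: one retry policy, compiled for both platforms.
suspend fun <T> withRetry(
    attempts: Int = 3,
    initialDelayMs: Long = 200,
    block: suspend () -> T
): T {
    var delayMs = initialDelayMs
    repeat(attempts - 1) {
        try {
            return block()
        } catch (e: CancellationException) {
            throw e // never swallow coroutine cancellation
        } catch (e: Exception) {
            delay(delayMs) // same backoff behavior on iOS and Android
            delayMs *= 2
        }
    }
    return block() // final attempt: let the failure propagate to the caller
}
```

Because both apps call the same function, a change to the retry policy lands on both platforms at once.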

The trick is to keep the scope small. KMP works best when you’re sharing logic that is genuinely platform-agnostic. Threading strategies in particular deserve caution. Anyone who’s been there knows that getting clever with concurrency across platform boundaries is exactly the kind of thing that produces hard-to-diagnose bugs.

Our migration was partial and deliberate. We didn’t try to share everything at once. We identified the business logic that was most duplicated and most stable, extracted it into a shared KMP module, and let both teams integrate it incrementally.

It made the right problems easier to solve, even if it didn’t solve all of them.


The skepticism

Not everyone on the team was immediately on board. For the iOS developers in particular, the ask was a real one: learn Kotlin, a language they hadn’t been hired to write, in order to contribute to a shared codebase that was built around Android’s primary language. That’s a fair concern and we took it seriously.

In practice it went better than expected. Kotlin and Swift are remarkably similar in design. Both are modern, expressive, null-safe languages that share a lot of the same patterns and idioms. I’ve taught multiple Swift developers to work in Kotlin and not one had a real technical issue picking it up. There are even a few places where Kotlin pulled ahead of Swift — the flexibility of when compared to switch being a favorite example — but that’s a conversation for another post.
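
For readers who haven't seen it, a quick illustration of what I mean by that flexibility: when works as an expression, doesn't need a subject, and can mix arbitrary conditions across branches.

```kotlin
// `when` without a subject: each branch is just a boolean condition,
// and the whole thing is an expression that returns a value.
fun describe(x: Any): String = when {
    x is Int && x < 0          -> "a negative number"
    x is Int                   -> "a number"
    x is String && x.isBlank() -> "a blank string"
    else                       -> "something else"
}
```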

The resulting shared module reduced the surface area for cross-platform inconsistencies, and the unit test coverage on shared logic actually improved because we only had to write it once and we knew both platforms were running exactly the same code.

Unlike other cross-platform tools, KMP is built from the ground up for mobile developers. Once the iOS team started working in it, that became obvious pretty quickly.


Why it was a good decision for the business too

Kotlin Multiplatform lets you write shared Kotlin code that compiles directly for each target platform: to a native binary on iOS, and to JVM bytecode on Android. There is no intermediate interpreter, no bridge layer, no runtime sitting between your shared code and the platform it’s running on. KMP doesn’t ask you to abandon your existing native codebase. You introduce it where it makes sense, layer by layer, and the rest of your native code works just the way it used to.
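
As a rough sketch of what that wiring looks like, a shared module’s build script declares both kinds of targets side by side. The versions and names below are illustrative, and it assumes the Kotlin Multiplatform and Android Gradle plugins are available to the build:

```kotlin
// shared/build.gradle.kts (plugin versions assumed to be managed in settings)
plugins {
    kotlin("multiplatform")
    id("com.android.library")
}

kotlin {
    androidTarget()      // shared code compiles to JVM bytecode here
    iosArm64()           // ...and to a native binary here
    iosSimulatorArm64()

    sourceSets {
        commonMain.dependencies {
            implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.8.1")
        }
    }
}

android {
    namespace = "com.example.shared" // hypothetical namespace
    compileSdk = 34
}
```

Both targets build from the same commonMain source set; each app then consumes the output as an ordinary native dependency.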

Upper management was initially worried that KMP didn’t have enough momentum to be worth the investment. The answer has gotten clearer since then. KMP usage more than doubled among multiplatform developers in a single year, rising from 7% in 2024 to 18% in 2025. At Google I/O 2024, Google announced official support for using KMP to share business logic between Android and iOS. JetBrains has been a reliable steward of developer tooling for over twenty years, and with Google now formally behind the technology, the risk profile looks very different than it did even two years ago.

The ecosystem is still maturing, and the library coverage isn’t as broad as some alternatives yet. But for a native mobile team that wants to reduce duplication without giving up the performance, platform integration, and developer experience they’ve built expertise around, KMP is the most natural fit on the market right now.

From Senior Engineer to Director: A Story About Trust

Being a director means accepting that you can’t review every PR personally or weigh in on every architectural decision. Trying to do it all produces engineers who stop thinking for themselves, because they know you’ll think for them. The job is to build a team you can trust, and then trust them.

That’s harder than it sounds. As an engineer, your judgment is grounded in what you know. You can evaluate a solution because you understand the problem. But a director is regularly asked to evaluate solutions to problems that sit outside their immediate expertise. The question stops being “is this technically correct?” and starts being “do I trust the person bringing this to me, and have they earned that trust?”


How trust gets built

Trust on a technical team isn’t given; it’s built with time and effort. I track it through the things that are actually visible: a developer’s commit history in the codebase, the quality of the code reviews they give and receive, the features they’ve shipped that have passed QA without drama.

Over time, a picture forms. Some engineers consistently bring clean, well-considered work. Others bring good instincts but need more guidance. A few surface tech debt that genuinely improves quality of life for the whole team, which tells you they’re thinking beyond their own ticket.

At the same time, I’m working to earn their trust as their director. They need to know that when they bring me a real problem, I will either help solve it or find someone who can. That consistency is what makes them comfortable surfacing issues early rather than sitting on them until they become crises.


Trusting the track record

My tech lead came to me with a proposal to refactor one of our smaller apps around a state machine pattern, using Jetpack Compose. The pitch was that predictable inputs and outputs for any given screen would reduce our error rates significantly.
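
The shape of the idea, sketched here as a minimal Compose screen rather than his actual code: model the screen as a closed set of states, and render each state exactly one way.

```kotlin
import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.items
import androidx.compose.material3.Button
import androidx.compose.material3.CircularProgressIndicator
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable

// A closed set of states: the screen can never be in an ambiguous condition.
sealed interface ScreenState {
    data object Loading : ScreenState
    data class Ready(val items: List<String>) : ScreenState
    data class Error(val message: String) : ScreenState
}

// Given a state, the output is fully determined; given an output,
// the state that produced it is unambiguous.
@Composable
fun ItemScreen(state: ScreenState, onRetry: () -> Unit) {
    when (state) {
        is ScreenState.Loading -> CircularProgressIndicator()
        is ScreenState.Ready -> LazyColumn {
            items(state.items) { Text(it) }
        }
        is ScreenState.Error -> Button(onClick = onRetry) {
            Text("Retry: ${state.message}")
        }
    }
}
```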

Jetpack Compose was still relatively new, and our team didn’t yet have deep experience making it work well in production. I couldn’t evaluate the technical specifics from first principles. What I could evaluate was his track record: consistently well-regarded by his peers, thoughtful in his reviews, right on his previous proposals. That was enough to work with.

Rather than defaulting to skepticism, I worked with him to develop the proposal into a full development plan to bring to the product team. The refactor took about two months. Since then, that app has needed only minimal updates to add new hardware support.

This project wouldn’t have gotten off the ground without trust working in both directions. He trusted me to champion something he believed in, and I trusted his judgment on how to accomplish it.


What building trust actually looks like

Trust flows naturally from the structures you put in place: code reviews that give engineers real visibility into each other’s work, training that raises the floor across the team, clear expectations about what good performance looks like. The more deliberately you build those structures, the more confidently you can trust the judgment that comes out of them.

The goal is a team where everyone is bringing their best thinking to the table, and where the director’s job is to synthesize that thinking, clear the path, and back the right ideas. When it’s working, the team can do things that no single engineer could do alone.

The Feedback Loop Is a Feature

I’ve noticed that the best software teams have one thing in common: they get better over time. Features ship faster. Bugs get caught earlier. Cross-team dependencies cause less chaos than they used to. It usually comes back to the same thing. They have feedback loops, and they’ve built them deliberately.

A feedback loop is a simple system for surfacing information early enough to act on it. The earlier you catch a problem, the cheaper it is to fix. That principle scales from a single line of code all the way up to a cross-team architecture review — and the teams that apply it at every level are the ones that compound.


Small loops, big returns

The easiest feedback loops to build are the small ones inside your own team. They don’t require negotiation with other teams or sign-off from leadership. They just require someone paying attention and a process for acting on what they hear. Our team holds regular retrospectives to solicit feedback on anything the team wants to improve — tooling, process, workflow, whatever is slowing people down or making their day harder.

One of the more impactful suggestions came out of one of those sessions. The idea was simple: one analyst writes up the test cases, a second reviews them to check for gaps, and then either analyst can run the tests because both understand them fully. Analysts with different testing styles could backstop each other, which meant the test suite got more complete over time rather than reflecting the blind spots of any one person. It was a structural improvement that raised the floor on our release quality.

But feedback loops need to close quickly. Feedback that goes nowhere teaches the team that feedback doesn’t matter. Every suggestion that gets implemented, or thoughtfully declined with a reason, reinforces that the channel is real.


Structure the conversation early

The bigger the dependency, the more expensive the feedback loop becomes if you build it too late. When we were integrating against an API being built by another team, we had a chance to apply this lesson on the second version of that integration.

Before anyone wrote a line of code, my lead developer asked me to put together a step-by-step integration guide that laid out exactly how our UI expected to use the interface and what error cases we anticipated encountering. We presented that framework directly to the server architects and asked them to redline it, push back on our assumptions, and flag anything we’d gotten wrong.

The result was that a significant portion of the hard integration work got done on paper before it got done in code. Both teams understood the contract. The architects could design around our actual needs rather than their assumptions about them. And when surprises did come up, they came up in a document rather than in a sprint.

That’s what a feedback loop looks like at the cross-team scale. You’re giving the team a structured opportunity to get its needs addressed, which surfaces issues earlier, while they’re still cheap to resolve.


What makes any of this possible

Whether it’s a QA process review or a pre-integration framework document, the underlying principle is the same. Find the point where information needs to travel between two parties, and build a structure that makes that travel faster and more reliable.

But none of it works without psychological safety. The QA suggestion came from analysts who felt comfortable saying “our process has gaps.” The integration framework came from developers who felt comfortable saying “we need more structure around this before we start.” Both of those conversations require people to be willing to surface a problem before they have the solution.

That willingness only happens on a team that trusts its leadership to hear problems as information rather than complaints, and to work through them collaboratively rather than defensively. When that trust exists, your team becomes your best early warning system. They will bring you the questions, the concerns, and the half-formed ideas before they become crises — and they will help you solve them.

Rubber Ducks and Hard Deadlines

Crunch time.

Every software team hits it eventually. A surprise requirement, a deadline that can’t move, a week where everyone is going to have to dig in. The question isn’t whether it happens. It’s whether your team comes out the other side intact.


When the surprise lands

In July 2024, a significant unplanned requirement came in with a firm external deadline. I was on vacation when it happened, which meant the news reached my team before it reached me. By the time I was in the loop, the decision had already been made to bring the remote team into the office to encourage in-person collaboration during the push.

There was grumbling. Of course there was. Nobody loves being asked to travel on short notice, and nobody loves having a surprise dropped on them mid-summer. That’s a completely human response to a genuinely inconvenient situation.

My job wasn’t to manage the grumbling away — it was to really listen. My job was to make sure my team knew I saw them as people first. That they weren’t just resources being deployed to solve a problem. If I could establish that clearly enough, the rest would follow.


Getting the ducks in a row

I hand-wrote a thank-you note for each team member, acknowledging their specific contributions and what I valued about working with them. I also hand-made each person their own little rubber duckie.

If you’re not familiar with rubber duck debugging, it’s a real technique — explaining your code out loud to an inanimate object helps you spot your own errors. I found a low-sew crochet pattern online, used Caron yarn, and attached a keychain loop to the top of each one so people could hang them nearby. The goal wasn’t to bribe anyone into good spirits. It was to signal, as concretely as I could, that I was thinking about each of them individually.

One team member who had recently transferred from another department seemed genuinely surprised to be singled out for appreciation. On his previous teams, crunch was just expected. Being acknowledged for showing up felt unfamiliar to him, and watching him recalibrate a little was one of the more memorable moments of that week.

My team members still have their duck friends at their desks today.


A promise on both sides

The other thing that made this crunch survivable was being firm about scope. My scrum master and I built a report that tracked the work story by story, showing where we expected to hit our targets, where we were at risk of falling behind, and what we planned to do about it. The engineers could see at a glance what the expectations were and what levers they could pull when things got tight.

That visibility mattered. When scope is defined and the finish line is real, people can push toward something. When scope keeps expanding, the finish line keeps moving, and that’s what burns people out. By holding firm on scope with the product team, we could make honest commitments to the engineers. Each side was making a promise. Each side could see what they were getting out of the deal.

We did have one last-minute addition land in the final weekend before delivery. We worked through it. But because the team trusted that we were protecting them from unnecessary scope creep wherever we could, and because they understood why this one couldn’t wait, they got through it without it feeling like a betrayal.

We hit the deadline.


What actually makes crunch survivable

It isn’t perks or free dinner. Those things aren’t bad, but they’re not the point. The point is trust that was built before the crunch ever started.

If your team knows from consistent experience that you will go to bat for them, that you notice their work, that you see them as people and not headcount — then when you need them to dig in, they will. Not because they have to. Because they want to see it through together.

One team member put it simply after that week: “Your leadership kept it bearable.” Another wrote: “Just want to say I appreciate the way you handled the situation.”

That’s what I was going for. Not heroics. Just people who felt seen, working together toward something they understood and believed in.

The ducks helped too. In more ways than one.

Code Review Doesn’t Have to Hurt

Most engineers have been there. You open a pull request and brace yourself. Three days pass. The branch drifts. When comments finally arrive, they’re nitpicky and cold, or a blunt “this seems off” with no other context.

On the other side, you’ve been tagged on a PR you don’t have full context for and you’re not sure how deep to go. You’re worried that leaving too many comments will make you look difficult, or that leaving too few will mean something slips through.

Neither of these feels like collaboration. And yet that’s exactly what code review is for: not approval, but shared ownership of what gets shipped.


The real point of the pull request

A pull request is an invitation for a fresh set of eyes to check on the health of the codebase. The engineer who wrote the code has been living inside it for days. A second person can step back and see it more objectively.

This means every comment has a valid place, from the humble nit to the bold “did you consider this approach?” A big architectural question might feel presumptuous coming from a junior engineer, but the newest person on the team sometimes sees the simplest solution precisely because they don’t have the context to overcomplicate it.

The goal is good code, and everything else is in service of that.


The ground rules

Good review culture doesn’t happen by accident. It needs a few agreements that everyone on the team holds to. Here are some that I think are effective:

Give your reviewers context. Write a quick summary for your pull request that describes your chosen approach and points reviewers to a good place to start.

Move quickly. A pull request that sits open is a liability: merge conflicts accumulate, and the author’s context fades. If you submitted the PR, address comments promptly. If you were tagged as a reviewer, get to it and stay on top of any threads you open.

Remember that there’s a human on the other end of every comment thread. You want them to understand your point of view, so explain it in a way the developer can act on.

Leave ego at the door. There’s no need to take it personally when your code gets picked apart. The review is about the code, not the person who wrote it.

Don’t resolve someone else’s comment. The commenter needs to confirm their concern was actually addressed, and taking that away from them short-circuits the whole point of the exchange.

Everyone runs through the acceptance criteria. The code author and reviewer are both responsible for confirming that the work actually does what it was supposed to do. If the criteria aren’t met, that’s the most important thing to flag, and it’s everyone’s job to catch it.


What to actually look for

Beyond the acceptance criteria, a good reviewer is looking for things the author is often too close to see. New warnings that crept in. Missing unit tests for new code paths. Files that got included accidentally. Logic that works for the happy path but breaks at the boundaries.

None of these are gotchas. They’re the kind of thing that happens to every engineer when they’ve been heads-down on a problem. Catching them in review is exactly why this is a critical part of the SDLC.


What it looks like when it’s working

When review culture is healthy, the PR comments don’t feel like an attack. They feel like a conversation. Junior engineers push back on senior ones and get taken seriously. Senior engineers leave explanations alongside their suggestions rather than just marking things wrong. Threads close quickly because everyone is paying attention. The codebase gets a little better with every merge.

More than that, the team gets better. Engineers learn from each other’s comments. Patterns propagate. Standards rise without anyone having to mandate them. The review becomes one of the primary ways the team teaches itself.

It all starts with remembering what a pull request is actually for.

How I Got Permission to Burn It All Down

There’s a class of bug that will make you feel like you’re losing your mind.

It doesn’t happen every time. It doesn’t happen on command. It happens at 2pm on a Tuesday, or right when a client is watching a demo, and then it doesn’t happen again for three days. When you try to trace it, you’re watching five things happen simultaneously, any one of which could be the culprit. The answer changes depending on timing you can’t control.

That was the iOS app I inherited.


The problem

The app handled communications with a piece of external hardware that could send and receive messages at any time. Events could originate from either the hardware or the user, often in rapid succession. The original architecture handled each incoming message by spinning up a new thread. It was a reasonable approach to the problem, given the constraints at the time.

The problem was state. If the hardware sent a message assuming a certain state, and a user action had just changed that state on a different thread, you had a crash. Or worse, silent data corruption. And it was nearly impossible to reproduce reliably. Shift the timing by milliseconds and you’d get a different result. Unit tests couldn’t catch it. Code review couldn’t catch it. It just happened, unpredictably, in production.
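
The shape of the hazard, sketched in Kotlin since that is the language used elsewhere on this site (the real app was iOS): a classic check-then-act race on shared state. The names here are hypothetical.

```kotlin
import kotlin.concurrent.thread

// Unsynchronized shared state, read and written from multiple threads.
var connectionState = "IDLE"

fun onHardwareMessage() = thread {
    if (connectionState == "IDLE") {
        // A user action on another thread can change the state right here,
        // between the check and the write. Shift the timing by milliseconds
        // and you get a different outcome.
        connectionState = "STREAMING"
    }
}

fun onUserAction() = thread {
    connectionState = "DISCONNECTED" // silently invalidates the check above
}
```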

When the previous team director left, we brought in outside contractors to assess the situation. One of them built me a diagram showing the call stack depth for a single event in the system. It looked like a subway map with no center. There was no reliable way to know, given any starting point, where you would end up.


The contrast

Meanwhile, our Android team had a similar product, built differently. Every incoming message dropped into a queue regardless of source. One thread pulled from that queue and processed messages in order, one at a time. No collisions. No ambiguity.
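
In rough outline, the Android approach looked like this. This is a sketch with illustrative names, assuming kotlinx-coroutines, not the production code:

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.launch

sealed interface Message {
    data class FromHardware(val payload: String) : Message
    data class FromUser(val action: String) : Message
}

class MessageProcessor(scope: CoroutineScope) {
    private val queue = Channel<Message>(Channel.UNLIMITED)

    init {
        // Exactly one consumer: messages are handled strictly in arrival
        // order, so state is only ever touched from this coroutine.
        scope.launch {
            for (message in queue) handle(message)
        }
    }

    // Producers on any thread just enqueue and return immediately.
    fun submit(message: Message) {
        queue.trySend(message)
    }

    private fun handle(message: Message) {
        // Update state, emit events. No other code path touches the state.
    }
}
```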

I took this contrast to my manager and walked him through tracing a single bug on each platform side by side. On iOS, you had to open multiple threads simultaneously, watch shared state being updated from different directions, and try to determine which update “won” and in what order. If the timing had been slightly different, the answer would have changed too. On Android, you followed the queue, read the log linearly, and wrote a unit test that reproduced it exactly.

He brought in the CTO. We showed him the same comparison.

The choice was clear. One system was consistently crashing. The other was stable. One had bugs you couldn’t reproduce. The other had bugs you could fix.


Making the case

The key was leading with the cost of the existing system rather than the appeal of a new one. The instability wasn’t just a technical problem. It was an ongoing drain on engineering time. Every hour spent trying to reproduce a threading bug was an hour not spent building features. Every contractor brought in to untangle the architecture was money spent with no clear end in sight.

A rewrite carried real risk. But continuing as-is meant accepting a steady, open-ended cost with no path to resolution.

We got four months approved to build an MVP. We adopted a state machine model with a message queue, where all interactions were traceable, all state transitions were explicit, the logic was unit-testable, and everything was debuggable. That became the foundation for an enterprise-level library the team could iterate on, test reliably, and actually reason about.
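
The property that made it testable is worth spelling out: a transition is a pure function of the current state and the incoming message. A minimal sketch, with an invented state set far smaller than the real one:

```kotlin
sealed interface ConnState {
    data object Idle : ConnState
    data object Connecting : ConnState
    data object Streaming : ConnState
}

sealed interface Event {
    data object ConnectRequested : Event
    data object HandshakeComplete : Event
    data object Disconnected : Event
}

// Explicit, exhaustive, and side-effect free: every legal transition is
// visible in one place, and every illegal one is a no-op by construction.
fun transition(state: ConnState, event: Event): ConnState = when (state) {
    ConnState.Idle -> if (event == Event.ConnectRequested) ConnState.Connecting else state
    ConnState.Connecting -> if (event == Event.HandshakeComplete) ConnState.Streaming else state
    ConnState.Streaming -> if (event == Event.Disconnected) ConnState.Idle else state
}

// Timing bugs become plain unit tests:
fun testDisconnectReturnsToIdle() {
    check(transition(ConnState.Streaming, Event.Disconnected) == ConnState.Idle)
}
```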


What I learned

Two things stayed with me from this experience.

The most persuasive argument for a hard decision is usually a comparison, not an assertion. I didn’t tell leadership the architecture was bad. I showed them what debugging looked like in each world and let the contrast speak for itself.

The other thing is that determinism is underrated. Engineers spend a lot of time chasing performance and new features, and not always enough time asking whether a system is actually predictable and testable. A system that is a little slower but completely predictable will outlast a faster, chaotic one, especially as the team and codebase grow.

IRIS by Language Services Associates

Instantly connect with a qualified interpreter from your iPad or iPhone!

Your communications are secure with encrypted audio and video. Powered by the industry-leading Vidyo platform, IRIS is designed to provide a clear, high-quality video and audio connection so you will feel like the interpreter is right there in the room with you!

Users must have an active account with Language Services Associates to sign in.

Download now

Tomatimo

Download Tomatimo Today!

Do you have trouble concentrating on your work for long periods of time? Do you find yourself getting bored and distracted by the huge amount of work ahead of you? This app can help you manage your time and allows quick breaks between tasks, preventing you from getting burned out.

On the main page, access the settings by tapping on the “i” in the upper right corner. Adjust the number and length of the work units you want to do today. Between each work unit you will get a quick break (“Break Time”), and then after a few of those, you will get a longer break (“Long Break Every…” and “Long Break Time”). Tap “Save” when you’re done, then “Start Tomo” to begin the timer.

You’ll start with a WORK unit; try to stay focused on a single task until the timer runs out. At the end, you’ll get a notification for a PLAY unit; take a break, get a drink, whatever you need to do to clear your mind. When the timer hits zero again, another WORK unit will start… and so on. Guaranteeing yourself a break every half hour will help keep you from procrastinating.

The app includes audio-visual reminders when the timer moves to the next phase, as well as pings to the Notification Center if the app is in the background (configure this in your System Settings).