Introduction

Each block in your project represents a set of actions that will be performed when the block is executed.  These actions can include things like returning a voice response, playing an audio file, or even getting data from an external source.

Starting Blocks

When you create a new project, you'll start with three blocks:  Welcome, Goodbye, and Help.

You can build fully functional voice applications with just these three blocks.  Of course, you can add additional blocks as needed for your application.

Welcome Block

The Welcome block is executed when the user launches your voice application.  The Welcome block header is highlighted in green to indicate it is the starting point for your application.  You cannot delete the Welcome block.

Goodbye Block

The Goodbye block is executed when the user wants to end your application.  They might say something like "quit" or "cancel" or "goodbye".  The Goodbye block header is highlighted in red to indicate it is a stopping point of your application.

Help Block

The Help block provides the user with instructions on how to use your skill and is executed when the user says something like "help" or "help me".  The Help block header is highlighted in yellow.

Block Configuration

When you click on a block in the canvas, you'll notice the block editor panel on the right changes to show the configuration settings for that block.

The Block Configuration Panel is divided into four tabs:  Activation, Data, Responses, and Next Actions.

Activation Tab

The Activation tab has settings to control how and when the block can be executed.  For example, the Welcome block is executed when the user launches your skill and the Goodbye block is executed whenever the user says something like "goodbye" or "quit".  The Activation tab allows you to configure the block to execute at the appropriate time and in response to the appropriate triggers.
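Conceptually, activation works like matching an incoming event (a launch or a user utterance) against each block's configured triggers.  A minimal sketch of that idea, where the block names, trigger phrases, and dictionary shape are hypothetical illustrations rather than actual Voice Apps internals:

```python
# Illustrative model of block activation: each block lists the
# triggers (a launch event or utterances) that can execute it.
# The structure and names here are hypothetical, not the real
# Voice Apps configuration schema.
BLOCKS = {
    "Welcome": {"triggers": ["LAUNCH"]},
    "Goodbye": {"triggers": ["goodbye", "quit", "cancel"]},
    "Help":    {"triggers": ["help", "help me"]},
}

def find_block(event):
    """Return the name of the first block whose triggers match the event."""
    for name, config in BLOCKS.items():
        if event in config["triggers"]:
            return name
    return None
```

For example, a launch event would activate the Welcome block, while an unrecognized utterance would match nothing and fall through to whatever fallback handling you configure.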

Data Tab

The Data tab allows you to create or retrieve data for use in your application.  You can create and manipulate data directly or send/receive data from external resources like Google Sheets or SendGrid.

Responses Tab

The Responses tab lets you respond back to the user.  You can respond with one of our many built-in voices, provide an audio response, or both.  You can also send cards to the user's Alexa device and provide rich visual output for devices with a screen.

Next Actions Tab

The Next Actions tab lets you tell Voice Apps what to do next after the block has performed its actions and given its responses.   You can do things like wait for the user to give another request, execute another block, or play an audio stream.

Block Display

When looking at the blocks on the canvas, you will notice that each block displays icons for the key functions it performs, so you can quickly look at a block and tell what it does.

When you have many blocks in your project, this allows you (and your team) to quickly see how the project works.

If you hover over an icon, a small message will tell you what the icon means.

Block Execution

Individual blocks are executed in the order of the configuration tabs from left to right and then in order of their functions from top to bottom.

The Activation tab contains settings for defining when and how the block will be executed.  Once those activation conditions are met and the block is executed, the functions on the Data tab will be executed with the top-most function executed first, then the second, and so on.  Then, the Responses tab will execute, with the top-most response executing first, then the next response, and so on.  Finally, the Next Actions tab will execute, with the first Next Action being processed first.
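That execution order can be sketched as a simple nested loop, left to right across the tabs and top to bottom within each tab.  The block structure and item names below are hypothetical illustrations, not actual Voice Apps code:

```python
# Illustrative sketch of block execution order: tabs run left to
# right, and items within each tab run top to bottom.  The block
# dictionary shape and item names are hypothetical.
TAB_ORDER = ["Data", "Responses", "Next Actions"]

def execute_block(block):
    """Run a block's items tab by tab, returning the execution trace."""
    trace = []
    for tab in TAB_ORDER:              # left-to-right tab order
        for item in block.get(tab, []):  # top-most item first
            trace.append((tab, item))
    return trace

block = {
    "Data": ["set greeting variable"],
    "Responses": ["speak greeting", "play chime"],
    "Next Actions": ["wait for user"],
}
```

Running this on the sample block yields the Data item first, then both responses in order, and the Next Action last.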

Project Execution

When you have multiple blocks in your project, it's important to understand how those blocks get executed.

Generally, your Welcome block will be the first block to execute.  This happens when the user opens your voice application by saying "Alexa, open My Awesome Skill", for example.

The Welcome block will execute as described above, working from left to right through the configuration tabs and top to bottom on each tab. 

The Next Actions tab of the Welcome block will then determine what happens next.  If the Welcome block is configured to "Wait for the user", then your voice application will send whatever responses the Welcome block generated to the user and wait until the user responds.

If, however, the Welcome block was configured to call another block (or intent) in its Next Actions, then the other block would execute immediately. 

This chaining of blocks can continue through many different blocks until there is a Next Action that either 1) waits for the user to respond, 2) ends the session, or 3) gives an output that doesn't expect the user to respond (like playing an audio stream).
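The chaining described above can be sketched as a loop that keeps executing blocks until it reaches a Next Action that stops the chain.  All of the names and the data shape here are hypothetical illustrations of the concept, not Voice Apps internals:

```python
# Illustrative sketch of block chaining: keep following "go to
# block" Next Actions until reaching one that stops the chain
# (waiting for the user, ending the session, or playing an
# audio stream).  Names and structure are hypothetical.
TERMINAL_ACTIONS = {"wait for user", "end session", "play audio stream"}

def run_chain(blocks, start):
    """Follow Next Actions from a starting block; return blocks executed."""
    visited = []
    current = start
    while current is not None:
        visited.append(current)
        next_action = blocks[current]["next_action"]
        if next_action in TERMINAL_ACTIONS:
            break                      # chain ends here
        current = next_action          # any other value names another block
    return visited

blocks = {
    "Welcome": {"next_action": "Menu"},
    "Menu": {"next_action": "wait for user"},
}
```

In this example, launching the application executes the Welcome block, which immediately chains into the Menu block, which then waits for the user to respond.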

