Chatbots (II): How to make your phone read Hacker News to you (contents included)

Following the post series about chatbots, I built an app that lets you navigate Hacker News (Y Combinator's tech news forum) with your voice through Google Assistant, and listen to the body of the stories you find most interesting.

[En castellano]

If you just want to use it, you only have to talk to Google and say "Talk to Hacker News Reader". However, if you want to know the main points so you can build something similar in a few hours and a few lines of code, stay tuned: we are going to see six human characteristics that are really easy to implement using Dialogflow.

Dialogflow is a Google platform (formerly API.AI) that lets us create chatbots easily.

It is so powerful that it lets us do a lot of things without writing a single line of code.

For instance, it has a built-in natural language processing system. Given a few training examples, it learns what our users are trying to say and routes them to the right part of our chatbot so they get the correct answer.

In short, it makes it really easy to give our chatbot human capacities.

1. Listening

Intents are the main building block of a chatbot in Dialogflow. An intent is something like a conversation unit: each piece of input our system can understand and answer.

Dialogflow lets us set events and other parameters that will trigger an intent. In particular, it lets us write training phrases to guide the chatbot: when it detects one of them, it knows which intent to trigger.

It also lets us write several responses, which it will pick at random, again saving us from writing any code.
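As an illustration only (not Dialogflow's actual internals), the random pick among an intent's configured responses behaves like this; the response variants below are made up:

```javascript
// Illustration: Dialogflow picks one of the responses configured for a
// matched intent at random. These variants are invented examples.
const responses = [
    'Welcome to Hacker News Reader!',
    'Hi! Which section do you want to hear?'
];

function pickResponse(variants) {
    return variants[Math.floor(Math.random() * variants.length)];
}

console.log(pickResponse(responses));
```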

2. Understanding

A chatbot that picks actions based on sentences, without a single line of code, is great but not powerful enough. In the end, what matters most is not hearing the sentences but understanding the concepts wrapped inside them.

When typing example phrases, you can select certain words to tell the platform that they are important and that it should abstract them away from their specific values.

Whenever the language understanding engine detects one of the entities we have mapped to variables, it will extract it and send it to us as a parameter in each request.

The system comes ready to understand a lot of different things, but it also lets us define our own entities to model exactly what we want. For instance, I created an entity with the different Hacker News sections: top, new, best, ask, show and job. With it, the system can understand that a user wants the jobs posted on Hacker News to be read.
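To get a rough idea of what our service receives, here is a sketch of pulling that section value out of a Dialogflow v2 WebhookRequest body. The "section" parameter name is just my example for this explanation; the real action may name its entity differently:

```javascript
// Sketch: reading an entity parameter from a Dialogflow v2 webhook request.
// The parameter name "section" is an assumption for illustration.
function getSection(webhookRequest) {
    const params = webhookRequest.queryResult.parameters || {};
    return params.section || 'top'; // fall back to the "top" section
}

// A fake request shaped like what Dialogflow sends after matching
// the sentence against the custom "section" entity.
const fakeRequest = {
    queryResult: {
        queryText: 'read me the jobs uploaded to Hacker News',
        parameters: { section: 'job' }
    }
};

console.log(getSection(fakeRequest)); // → 'job'
```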

3. Intelligence

If the intents' predefined answers are not enough, we can create our very own web service to answer the requests the system understands.

Google offers libraries that make it really easy to create such a service in any language on any platform. However, for small things like Hacker News Reader we can write the code right inside the platform in Node.js, and it will be deployed to Firebase with a single click.

When thinking about what you can build, remember that by coding a service (on Firebase or anywhere else) you can do literally anything.

That is, you are not limited to APIs to access content, because there are no cross-origin restrictions. You have the whole Internet in your hands.

For instance, my action lets users listen to the articles linked from Hacker News. To do this, it downloads the web page (like a browser would) and processes it to extract the content (I didn't work very hard on this part; it could be better).

4. Analysis

To use the inline editor we have to accept some restrictions, such as naming our function "dialogflowFirebaseFulfillment" if we want automated deployment.

However, because Dialogflow does the listening and understanding for us, when we add some intelligence it becomes really easy to analyze the requests our chatbot receives.

Mapping each intent to a function of our own is really easy. Intents do the listening, so they tell us what the user wants.

We can also access the parameters the system has understood thanks to entities (understanding).

const functions = require('firebase-functions');
const { WebhookClient } = require('dialogflow-fulfillment');

exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
    const agent = new WebhookClient({ request, response });

    function read(agent) {
        var number = agent.parameters.number || 0;
        // ... read the story selected by `number` ...
    }

    // welcome and repeat are defined elsewhere in the service
    let intentMap = new Map();
    intentMap.set('Default Welcome Intent', welcome);
    intentMap.set('Repeat', repeat);
    intentMap.set('Back', repeat);
    intentMap.set('Read', read);

    var promise = null;
    try {
        promise = agent.handleRequest(intentMap).then(function() {
            console.log('all done');
        });
    } catch (e) {
        console.error(e);
    }
    return promise;
});

5. Answering

To give our chatbot the capacity to answer, we only have to use the add method of WebhookClient. We can pass it text, suggestions for the user's next answer, or rich cards where we can embed images, use emoticons, etc.

Keep in mind that some devices running your action may not have a display or a browser. This matters if we want a strictly conversational interface, so we should avoid relying on visual elements and help our bot along with words alone.
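As a sketch of that idea (not the action's actual code), here is a responder that only adds suggestion chips when the device has a screen, and spells the options out in words otherwise. The tiny mock agent stands in for WebhookClient so the logic is self-contained; in the real service you would call agent.add with text and Suggestion objects from dialogflow-fulfillment:

```javascript
// Sketch: adapt the answer to the device's capabilities. `hasScreen`
// would come from the surface capabilities of the device; the mock
// agent below just records what would be sent.
function respond(agent, story, hasScreen) {
    agent.add('Story number one: ' + story.title);
    if (hasScreen) {
        agent.add('[suggestion] Read it'); // suggestion chip on real devices
        agent.add('[suggestion] Next');
    } else {
        // No display: offer the same options in spoken words.
        agent.add('Say "read it" to hear the article, or "next" to continue.');
    }
}

const mockAgent = { replies: [], add(r) { this.replies.push(r); } };
respond(mockAgent, { title: 'Show HN: My project' }, false);
console.log(mockAgent.replies);
```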

6. Memory

The most annoying thing in a conversation is having to repeat what we have already said, so it is really important that our bot remembers what the user said in previous turns of the conversation.

For this, we will use contexts. Contexts are an element Dialogflow manages to help choose which intent to trigger. They can also be used, for instance, to know whether the client device has a display available.

Their use is not very well documented, but once you debug the basic methods you see that it is trivial to use them to save information between conversation turns.

    var context = agent.getContext('vars');
    var vars = context ? context.parameters : {
        ts: (new Date()).getTime() % 10000,
        items: [],
        pointer: 0,
        list: 0
    };

    // ... update vars while handling the request ...

    agent.setContext({
        name: 'vars',
        lifespan: 1000,
        parameters: vars
    });

With these six human capacities, you hold the keys to build something similar on your own and add a lot of functionality to Google Assistant.

I hope the action and the information in this post are useful to you. If so, please share and spread the word.

We will continue with other systems that make building chatbots easy, and with how to integrate our chatbots into other channels.

Author: Javi López G.

Architect/developer, creative, seeker of new solutions and business models, constructive critic and former many things

