Dynamic Alerts for AWS IAM Configuration Changes

This blog post is an improved version of this AWS blog post by Will Kruse, in which he walks us through how to set up CloudWatch alarms on IAM configuration changes. Here’s a quick overview of the suggested setup and how AWS usage ends up triggering a CloudWatch alarm. The users of your AWS account make calls to IAM (and other AWS services), and a record of these calls is included in your CloudTrail logs (these records are called “events”). CloudTrail publishes these log events to your CloudWatch Logs. CloudWatch lets you run a filter on these events and generate a CloudWatch metric from any matches. It’s up to you to define when these metrics trigger an alarm, but when enough events occur in a specified time period, you receive an alert via an SNS topic or email.

The following image illustrates the process:

The Problem:

The email alert received contains information about the CloudWatch metric that reached its threshold and raised the alarm, but nothing about the actual log event or AWS activity that caused it. You have to go through your logs to find out what happened.

The following image shows the info contained in the email received:

Alarm Details:

- Name:                   	IAMAuthnAuthzActivityAlarm

- State Change:           	INSUFFICIENT_DATA -> ALARM

- Reason for State Change:	Threshold Crossed: 1 datapoint (1.0) was greater than or equal to the threshold (1.0).

- Timestamp:              	Monday 26 January, 2015 21:50:52 UTC

- AWS Account:            	123456789012

Threshold:

- The alarm is in the ALARM state when the metric is GreaterThanOrEqualToThreshold 1.0 for 300 seconds.
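For reference, the metric filter and alarm behind an email like this can also be created programmatically. Below is a minimal boto3 sketch of that original setup; the log group name, metric names, and SNS topic ARN are hypothetical placeholders you would adjust to your account.

```python
LOG_GROUP = 'CloudTrail/logs'  # hypothetical log group receiving CloudTrail events
SNS_TOPIC_ARN = 'arn:aws:sns:us-west-2:123456789012:YOURSNSTOPICNAME'

def alarm_params(threshold=1.0, period=300):
    # Mirrors the alarm in the email: >= 1 matching event within a 300-second period
    return {
        'AlarmName': 'IAMAuthnAuthzActivityAlarm',
        'Namespace': 'CloudTrailMetrics',
        'MetricName': 'IAMActivityCount',
        'Statistic': 'Sum',
        'Period': period,
        'EvaluationPeriods': 1,
        'Threshold': threshold,
        'ComparisonOperator': 'GreaterThanOrEqualToThreshold',
        'AlarmActions': [SNS_TOPIC_ARN],
    }

def create_metric_filter_and_alarm():
    import boto3  # deferred import; actually running this needs AWS credentials
    boto3.client('logs').put_metric_filter(
        logGroupName=LOG_GROUP,
        filterName='IAMActivityFilter',
        filterPattern='{ $.eventSource = "iam.amazonaws.com" }',
        metricTransformations=[{
            'metricName': 'IAMActivityCount',
            'metricNamespace': 'CloudTrailMetrics',
            'metricValue': '1',
        }],
    )
    boto3.client('cloudwatch').put_metric_alarm(**alarm_params())
```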

Solution:

We can improve the same setup with minor changes to receive email alerts carrying more insightful information: the action performed, user identity details, request parameters, IP address, mode of access, and so on.

The only change is that instead of using a metric and a CloudWatch alarm, we stream the CloudWatch logs to a Lambda function using a subscription filter. This passes the actual log event to the function in the event object. We can parse this information into whatever structure we need and then publish it to the same SNS topic, or send an email alert using SES.

Step-1: Stream Cloudwatch log events to Lambda function

Let us say CloudTrail is logging events to the /Cloudtrail/logs log group in CloudWatch; you can stream them to a Lambda function like this:

Note: You can use a subscription filter pattern to make sure only IAM-specific log events are streamed to the Lambda function, ignoring the rest.
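The same wiring can be done with boto3. Here is a minimal sketch; the filter name and ARNs are hypothetical, and note that the Lambda function must separately grant CloudWatch Logs permission to invoke it (via lambda add-permission).

```python
def subscription_filter_params(log_group, lambda_arn, pattern=''):
    # An empty pattern streams every event; pass an IAM-only pattern to narrow it
    return {
        'logGroupName': log_group,
        'filterName': 'iam-events-to-lambda',
        'filterPattern': pattern,
        'destinationArn': lambda_arn,
    }

def stream_logs_to_lambda(log_group, lambda_arn, pattern=''):
    import boto3  # deferred import; actually running this needs AWS credentials
    boto3.client('logs').put_subscription_filter(
        **subscription_filter_params(log_group, lambda_arn, pattern))
```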

Step-2: Add Subscription Filter Pattern and start streaming

Important: The blank spaces in filter patterns are for clarity. Also, note the use of outer curly brackets and inner parentheses.

Monitor changes to IAM:
If you are interested only in changes to your IAM account, use the following filter pattern:

{ ( ($.eventSource = "iam.amazonaws.com") && (($.eventName = "Add*") || ($.eventName = "Attach*") || ($.eventName = "Change*") || ($.eventName = "Create*") || ($.eventName = "Deactivate*") || ($.eventName = "Delete*") || ($.eventName = "Detach*") || ($.eventName = "Enable*") || ($.eventName = "Put*") || ($.eventName = "Remove*") || ($.eventName = "Set*") || ($.eventName = "Update*") || ($.eventName = "Upload*")) ) }

This filter pattern will only match events from the IAM service whose names begin with “Add,” “Attach,” “Change,” “Create,” “Deactivate,” “Delete,” “Detach,” “Enable,” “Put,” “Remove,” “Set,” “Update,” or “Upload.” For more information about why we’re interested in APIs matching these patterns, see the IAM API Reference.

Monitor changes to authentication and authorization configuration:
If you’re interested in changes to your AWS authentication (security credentials) and authorization (policy) configuration, use the following filter pattern:

{ ( ($.eventSource = "iam.amazonaws.com") && (($.eventName = "Put*Policy") || ($.eventName = "Attach*") || ($.eventName = "Detach*") || ($.eventName = "Create*") || ($.eventName = "Update*") || ($.eventName = "Upload*") || ($.eventName = "Delete*") || ($.eventName = "Remove*") || ($.eventName = "Set*")) ) }

This filter pattern matches calls to IAM that modify policy or create, update, upload, and delete IAM elements.

After you have finished with the filter patterns, proceed to the next step, review everything, and then start streaming to the Lambda function.

Step-3: Lambda function code

Add this Python code snippet to your Lambda function and parse the log event into any structure needed, perhaps even composing a readable message for convenience.

lambda_function.py

import boto3
import json
import gzip
import base64

SNS_CLIENT = boto3.client('sns')
SNS_TOPIC_ARN = 'arn:aws:sns:us-west-2:123456789012:YOURSNSTOPICNAME'

def lambda_handler(event, context):

	data_str = gzip.decompress(base64.b64decode(event['awslogs']['data']))
	data = json.loads(data_str)
	logs = data['logEvents']
	
	records = []
	
	for log in logs:
		
		# add parsing logic if needed
		record = json.loads(log['message'])
		records.append(record)
	
	payload = json.dumps(records)
	print(payload)
	
	# Push payload to SNS
	SNS_CLIENT.publish(
		TopicArn = SNS_TOPIC_ARN,
		Subject = 'IAM Changes Alert',
		Message = payload
	)
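As mentioned earlier, SES is an alternative to publishing on an SNS topic. A hedged sketch of that variant might look like this; the sender and recipient addresses are placeholders, and the sender must be an SES-verified identity.

```python
def format_alert(records):
    # Pretty-print the parsed CloudTrail records for a readable email body
    import json
    return json.dumps(records, indent=2, default=str)

def send_email_alert(records, sender, recipient):
    import boto3  # deferred import; actually sending needs AWS credentials
    boto3.client('ses').send_email(
        Source=sender,
        Destination={'ToAddresses': [recipient]},
        Message={
            'Subject': {'Data': 'IAM Changes Alert'},
            'Body': {'Text': {'Data': format_alert(records)}},
        },
    )
```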

Conclusion

This way we can add a new layer of security to our AWS accounts. Even if you don’t have a solid response time to unexpected events, at least you know where to start troubleshooting.

I hope it was helpful, thank you!

This story is authored by Koushik. He is a software engineer specializing in AWS Cloud Services.

Building a data lake on AWS using Redshift Spectrum

In one of our earlier posts, we had talked about setting up a data lake using AWS LakeFormation. Once the data lake is setup, we can use Amazon Athena to query data. Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage. With Athena, there is no need for complex ETL jobs to prepare data for analysis. Today, we will explore querying the data from a data lake in S3 using Redshift Spectrum. This use case makes sense for those organizations that already have a significant exposure to using Redshift as their primary data warehouse.

Amazon Redshift Spectrum

Amazon Redshift Spectrum is used to efficiently query and retrieve structured and semistructured data from files in Amazon S3 without having to load the data into Amazon Redshift tables. Amazon Redshift Spectrum resides on dedicated Amazon Redshift servers that are independent of your cluster. Redshift Spectrum pushes many compute-intensive tasks, such as predicate filtering and aggregation, down to the Redshift Spectrum layer.

How is Amazon Athena different from Amazon Redshift Spectrum?

  1. Redshift Spectrum needs an Amazon Redshift cluster and an SQL client that’s connected to the cluster so that we can execute SQL commands. But Athena is serverless.
  2. In Redshift Spectrum, external tables are read-only; insert queries are not supported. Athena supports insert queries, which write records into S3.

Amazon Redshift cluster

To use Redshift Spectrum, you need an Amazon Redshift cluster and a SQL client that’s connected to your cluster so that you can execute SQL commands. The cluster and the data files in Amazon S3 must be in the same AWS Region.

The Redshift cluster needs authorization to access the external data catalog in AWS Glue or Amazon Athena and the data files in Amazon S3. Let’s kick off the steps required to get the Redshift cluster going.

Create an IAM Role for Amazon Redshift

  1. Open the IAM console and choose Roles.
  2. Choose Create role.
  3. Choose AWS service, and then select Redshift.
  4. Under Select your use case, select Redshift – Customizable and then choose Next: Permissions.
  5. On the Attach permissions policy page, attach the following policies: AmazonS3FullAccess, AWSGlueConsoleFullAccess, and AmazonAthenaFullAccess.
  6. For Role name, enter a name for your role, in this case redshift-spectrum-role.
  7. Choose Create role.
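The console steps above can also be scripted. Here is a minimal boto3 sketch that creates an equivalent role; the role name mirrors the one used in this post, and running it requires IAM permissions.

```python
import json

# Trust policy allowing Redshift to assume the role
TRUST_POLICY = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Principal': {'Service': 'redshift.amazonaws.com'},
        'Action': 'sts:AssumeRole',
    }],
}

# The three managed policies attached in the console walkthrough
MANAGED_POLICIES = ['AmazonS3FullAccess', 'AWSGlueConsoleFullAccess',
                    'AmazonAthenaFullAccess']

def create_spectrum_role(role_name='redshift-spectrum-role'):
    import boto3  # deferred import; actually running this needs AWS credentials
    iam = boto3.client('iam')
    role = iam.create_role(
        RoleName=role_name,
        AssumeRolePolicyDocument=json.dumps(TRUST_POLICY),
    )
    for policy in MANAGED_POLICIES:
        iam.attach_role_policy(
            RoleName=role_name,
            PolicyArn='arn:aws:iam::aws:policy/' + policy,
        )
    return role['Role']['Arn']
```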

Create a Sample Amazon Redshift Cluster

  • Open the Amazon Redshift console.
  • Choose the AWS Region. The cluster and the data files in Amazon S3 must be in the same AWS Region.
  • Select CLUSTERS and choose Create cluster.
    Cluster Configuration:
    • Based on the size of the data and its type (compressed/uncompressed), select the nodes.
    • Amazon Redshift can calculate the best configuration of a cluster based on your requirements: choose Calculate the best configuration for your needs.
    • In this case, use dc2.large with 2 nodes.
  • Specify Cluster details.
    • Cluster identifier: Name-of-the-cluster.
    • Database port: Port number 5439 which is the default.
    • Master user name: Master user of the DB instance.
    • Master user password: Specify the password.
  • In the Cluster permissions section, select Available IAM roles, choose the IAM role created earlier (redshift-spectrum-role), and then choose Add IAM role.
  • Select Create cluster and wait till the status is Available.

Connect to Database

  1. Open the Amazon Redshift console and choose EDITOR.
  2. The database name is dev.

Create an External Schema and an External Table

External tables must be created in an external schema.

  • To create an external schema, run the following command. Please replace the iam_role with the role that was created earlier.
create external schema spectrum
from data catalog
database 'spectrumdb'
iam_role 'arn:aws:iam::xxxxxxxxxxxx:role/redshift-spectrum-role'
create external database if not exists;
  • Copy data using the following command. The sample data is provided by AWS. Configure the AWS CLI on your machine and run this command.
aws s3 cp s3://awssampledbuswest2/tickit/spectrum/sales/ s3://bucket-name/data/source/ --recursive
  • To create an external table, run the following command. The table is created in the spectrum schema.
create external table spectrum.table_name(
salesid integer,
listid integer,
sellerid integer,
buyerid integer,
eventid integer,
dateid smallint,
qtysold smallint,
saletime timestamp)
row format delimited
fields terminated by '\t'
stored as textfile
location 's3://bucket-name/copied-prefix/';

Now the table is available in Redshift Spectrum. We can analyze the data using SQL queries like so:

SELECT *
FROM spectrum.rs_table
LIMIT 10;

Create a Table in Athena using Glue Crawler

In case you are just starting out on the AWS Glue crawler, I have explained how to create one from scratch in one of my earlier articles. In this case, I created the rs_table in spectrumdb database.

Comparison between Amazon Redshift Spectrum and Amazon Athena

I ran some basic queries in both Athena and Redshift Spectrum. The elapsed-time comparison is as follows: about 3 seconds on Athena compared to about 16 seconds on Redshift Spectrum.

The idea behind this post was to get you up and running with a basic data lake on S3 that is queryable on Redshift Spectrum. I hope it was useful.

This story is authored by PV Subbareddy. He is a Big Data Engineer specializing on AWS Big Data Services and Apache Spark Ecosystem.

Object Detection in React Native App using TensorFlow.js

In this post, we are going to build a React Native app for detecting objects in an image using TensorFlow.js.

TensorFlow.js is a JavaScript library for training and deploying machine learning models in the browser and in Node.js. It provides many pre-trained models that ease the time-consuming task of training a new machine learning model from scratch.

Solution Architecture:

The image, captured or selected from the file system, is sent to API Gateway, where it triggers a Lambda function that stores the image in an S3 bucket and returns the stored image’s URL.

Installing Dependencies:

Go to the React Native docs and select React Native CLI Quickstart. As we are building an Android application, select your Development OS and choose Android as the Target OS.

Follow the docs for installing dependencies, then create a new React Native Application. Use the command-line interface to generate a new React Native project called ObjectDetection.

npx react-native init ObjectDetection

Preparing the Android device:

We shall need an Android device to run our React Native Android app. If you have a physical Android device, you can use it for development by connecting it to your computer using a USB cable and following the instructions here.

Now go to the command line and run following command inside your React Native app directory.

cd ObjectDetection
react-native run-android

If everything is set up correctly, you should see your new app running on your physical device. Next, we need to install the react-native-image-picker package to capture or select an image. To install the package run the following command inside the project directory.

npm install react-native-image-picker --save

We would also need a few other packages as well. To install them run the following commands inside the project directory.

npm install expo-permissions --save
npm install expo-constants --save
npm install jpeg-js --save

The expo-permissions package allows us to prompt for various permissions to access device sensors, device cameras, etc.

The expo-constants package provides system information that remains constant throughout the lifetime of the app.

The jpeg-js package is used to decode the data from the image.

Integrating TensorFlow.js in our React Native App:

Follow this link to integrate TensorFlow.js in our React Native app. After that, we must also install @tensorflow-models/mobilenet. To install it, run the following command inside the project directory.

npm install @tensorflow-models/mobilenet --save

We also need to set up an API in the AWS console and also create a Lambda function which will store the image in S3 Bucket and will return the stored image URL.

API Creation in AWS Console:

Before going further, create an API in your AWS console following Working with API Gateway paragraph in the following post:

https://medium.com/zenofai/serverless-web-application-architecture-using-react-with-amplify-part1-5b4d89f384f7

Once you are done with creating API come back to the React Native application. Go to your project directory and replace your App.js file with the following code.

App.js

import React from 'react';
import {
  StyleSheet,
  Text,
  View,
  ScrollView,
  TouchableHighlight,
  Image
} from 'react-native';
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';
import { fetch } from '@tensorflow/tfjs-react-native';
import Constants from 'expo-constants';
import * as Permissions from 'expo-permissions';
import * as jpeg from 'jpeg-js';
import ImagePicker from "react-native-image-picker";
import Amplify, { API } from "aws-amplify";

Amplify.configure({
  API: {
    endpoints: [
      {
        name: "<Your-API-Name>",
        endpoint: "<Your-API-Endpoint>"
      }
    ]
  }
});

class App extends React.Component {

  state = {
    isTfReady: false,
    isModelReady: false,
    predictions: null,
    image: null,
    base64String: '',
    capturedImage: '',
    imageSubmitted: false,
    s3ImageUrl: ''
  }

  async componentDidMount() {
    // Wait for tf to be ready.
    await tf.ready();
    // Signal to the app that tensorflow.js can now be used.
    this.setState({
      isTfReady: true
    });
    this.model = await mobilenet.load();
    this.setState({ isModelReady: true });
    this.askCameraPermission();
  }

  askCameraPermission = async () => {
    if (Constants.platform.android) {
      const { status } = await Permissions.askAsync(Permissions.CAMERA_ROLL);
      if (status !== 'granted') {
        alert('Please provide camera roll permissions to make this work!');
      }
    }
  }

  imageToTensor(rawImageData) {
    const TO_UINT8ARRAY = true;
    const { width, height, data } = jpeg.decode(rawImageData, TO_UINT8ARRAY);
    // Drop the alpha channel info for mobilenet
    const buffer = new Uint8Array(width * height * 3);
    let offset = 0 ; // offset into original data
    for (let i = 0; i < buffer.length; i += 3) {
      buffer[i] = data[offset];
      buffer[i + 1] = data[offset + 1];
      buffer[i + 2] = data[offset + 2];

      offset += 4;
    }

    return tf.tensor3d(buffer, [height, width, 3]);
  }

  classifyImage = async () => {
    try {
      const imageAssetPath = this.state.s3ImageUrl;
      const response = await fetch(imageAssetPath, {}, { isBinary: true });
      const rawImageData = await response.arrayBuffer();
      const imageTensor = this.imageToTensor(rawImageData);
      const predictions = await this.model.classify(imageTensor);
      this.setState({ predictions });
    } catch (error) {
      console.log(error);
    }
  }
  
  renderPrediction = prediction => {
    return (
      <Text key={prediction.className} style={styles.text}>
        {prediction.className}
      </Text>
    )
  }

  captureImageButtonHandler = () => {
    this.setState({
      imageSubmitted: false,
      predictions: null
    });
    ImagePicker.showImagePicker({ title: "Pick an Image", maxWidth: 800, maxHeight: 600 }, (response) => {
      if (response.didCancel) {
        console.log('User cancelled image picker');
      } else if (response.error) {
        console.log('ImagePicker Error: ', response.error);
      } else if (response.customButton) {
        console.log('User tapped custom button: ', response.customButton);
      } else {
        // You can also display the image using data:
        const source = { uri: 'data:image/jpeg;base64,' + response.data };
        this.setState({ capturedImage: response.uri, base64String: source.uri });
      }
    });
  }

  submitButtonHandler = () => {
    if (this.state.capturedImage == '' || this.state.capturedImage == undefined || this.state.capturedImage == null) {
      alert("Please Capture the Image");
    } else {
      this.setState({
        imageSubmitted: true
      });
      const apiName = "<Your-API-Name>";
      const path = "<Your-API-Path>";
      const init = {
        headers: {
          'Accept': 'application/json',
          "Content-Type": "application/x-amz-json-1.1"
        },
        body: JSON.stringify({
          Image: this.state.base64String,
          name: "testImage.jpg"
        })
      }

      API.post(apiName, path, init).then(response => {
        // setState is asynchronous, so classify only once the URL is stored
        this.setState({ s3ImageUrl: response }, () => {
          if (this.state.s3ImageUrl !== '') {
            this.classifyImage();
          }
        });
      });
    }
  }

  render() {
    const { isModelReady, predictions } = this.state
    const capturedImageUri = this.state.capturedImage;
    const imageSubmittedCheck = this.state.imageSubmitted;

    return (
      <View style={styles.MainContainer}>
        <ScrollView>
          <Text style={{ fontSize: 20, color: "#000", textAlign: 'center', marginBottom: 15, marginTop: 10 }}>Object Detection</Text>

          {this.state.capturedImage !== "" && <View style={styles.imageholder} >
            <Image source={{ uri: this.state.capturedImage }} style={styles.previewImage} />
          </View>}

          {this.state.capturedImage != '' && imageSubmittedCheck && (
            <View style={styles.predictionWrapper}>
              {isModelReady && capturedImageUri && imageSubmittedCheck && (
                <Text style={styles.text}>
                  Predictions: {predictions ? '' : 'Loading...'}
                </Text>
              )}
              {isModelReady &&
                predictions &&
                predictions.map(p => this.renderPrediction(p))}
            </View>
          )
          }

          <TouchableHighlight style={[styles.buttonContainer, styles.captureButton]} onPress={this.captureImageButtonHandler}>
            <Text style={styles.buttonText}>Capture Image</Text>
          </TouchableHighlight>

          <TouchableHighlight style={[styles.buttonContainer, styles.submitButton]} onPress={this.submitButtonHandler}>
            <Text style={styles.buttonText}>Submit</Text>
          </TouchableHighlight>

        </ScrollView>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  MainContainer: {
    flex: 1,
    backgroundColor: 'white',
  },
  text: {
    color: '#000000',
    fontSize: 16
  },
  predictionWrapper: {
    height: 100,
    width: '100%',
    flexDirection: 'column',
    alignItems: 'center'
  },
  buttonContainer: {
    height: 45,
    flexDirection: 'row',
    alignItems: 'center',
    justifyContent: 'center',
    marginBottom: 20,
    width: "80%",
    borderRadius: 30,
    marginTop: 20,
    marginLeft: 30,
  },
  captureButton: {
    backgroundColor: "#337ab7",
    width: 350,
  },
  buttonText: {
    color: 'white',
    fontWeight: 'bold',
  },
  submitButton: {
    backgroundColor: "#C0C0C0",
    width: 350,
    marginTop: 5,
  },
  imageholder: {
    borderWidth: 1,
    borderColor: "grey",
    backgroundColor: "#eee",
    width: "50%",
    height: 150,
    marginTop: 10,
    marginLeft: 100,
    flexDirection: 'row',
    alignItems: 'center'
  },
  previewImage: {
    width: "100%",
    height: "100%",
  }
})

export default App;

In the above code, we configure Amplify with the API name and endpoint URL that you created, as shown below.

Amplify.configure({
 API: {
   endpoints: [
     {
       name: '<Your-API-Name>',
       endpoint: '<Your-API-Endpoint-URL>',
     },
   ],
 },
});

The capture button will trigger the captureImageButtonHandler function. It will then ask the user to take a picture or select an image from the file system. We will store that image in the state as shown below.

captureImageButtonHandler = () => {
    this.setState({
      imageSubmitted: false,
      predictions: null
    });

    ImagePicker.showImagePicker({ title: "Pick an Image", maxWidth: 800, maxHeight: 600 }, (response) => {
      if (response.didCancel) {
        console.log('User cancelled image picker');
      } else if (response.error) {
        console.log('ImagePicker Error: ', response.error);
      } else if (response.customButton) {
        console.log('User tapped custom button: ', response.customButton);
      } else {
        const source = { uri: 'data:image/jpeg;base64,' + response.data };
        this.setState({ capturedImage: response.uri, base64String: source.uri });
      }
    });
  }

After capturing the image we will preview that image. 

By clicking on the submit button, the submitButtonHandler function gets triggered, where we send the image to the endpoint as shown below.

submitButtonHandler = () => {
    if (this.state.capturedImage == '' || this.state.capturedImage == undefined || this.state.capturedImage == null) {
      alert("Please Capture the Image");
    } else {
      this.setState({
        imageSubmitted: true
      });
      const apiName = "<Your-API-Name>";
      const path = "<Path-to-your-API>";
      const init = {
        headers: {
          'Accept': 'application/json',
          "Content-Type": "application/x-amz-json-1.1"
        },
        body: JSON.stringify({
          Image: this.state.base64String,
          name: "testImage.jpg"
        })
      }

      API.post(apiName, path, init).then(response => {
        // setState is asynchronous, so classify only once the URL is stored
        this.setState({ s3ImageUrl: response }, () => {
          if (this.state.s3ImageUrl !== '') {
            this.classifyImage();
          }
        });
      });
    }
  }

After submitting the image, API Gateway triggers the Lambda function. The Lambda function stores the submitted image in the S3 bucket and returns its URL in the response. The received URL is then set on the state variable, and the classifyImage function is called as shown above.

The classifyImage function reads the raw data from the image and yields classification results in the form of predictions. The image is read from S3 using the URL we stored in the component’s state. Similarly, the results yielded by this asynchronous method are saved in the predictions state variable.

classifyImage = async () => {
    try {
      const imageAssetPath = this.state.s3ImageUrl;
      const response = await fetch(imageAssetPath, {}, { isBinary: true });
      const rawImageData = await response.arrayBuffer();
      const imageTensor = this.imageToTensor(rawImageData);
      const predictions = await this.model.classify(imageTensor);
      this.setState({ predictions });
    } catch (error) {
      console.log(error);
    }
  }

The package jpeg-js decodes the width, height, and binary data from the image inside the handler method imageToTensor, which accepts a parameter of the raw image data.

imageToTensor(rawImageData) {
    const TO_UINT8ARRAY = true;
    const { width, height, data } = jpeg.decode(rawImageData, TO_UINT8ARRAY);
    // Drop the alpha channel info for mobilenet
    const buffer = new Uint8Array(width * height * 3);
    let offset = 0 ; // offset into original data
    for (let i = 0; i < buffer.length; i += 3) {
      buffer[i] = data[offset];
      buffer[i + 1] = data[offset + 1];
      buffer[i + 2] = data[offset + 2];

      offset += 4;
    }

    return tf.tensor3d(buffer, [height, width, 3]);
  }

Here TO_UINT8ARRAY is a flag telling jpeg-js to return the decoded pixel data as a Uint8Array, an array of 8-bit unsigned integers.

Lambda Function:

Add the code below to the Lambda function you created earlier in your AWS console. It stores the captured image in the S3 bucket and returns the URL of the image.

const AWS = require('aws-sdk');
var s3BucketName = "<Your-S3-BucketName>";
var s3Bucket = new AWS.S3( { region: "<Your-S3-Bucket-Region>", params: {Bucket: s3BucketName} } );

exports.handler = (event, context, callback) => {
    let parsedData = JSON.parse(event);
    let encodedImage = parsedData.Image;
    var filePath = parsedData.name;
    let buf = Buffer.from(encodedImage.replace(/^data:image\/\w+;base64,/, ""), 'base64');
    var data = {
        Key: filePath, 
        Body: buf,
        ContentEncoding: 'base64',
        ContentType: 'image/jpeg'
    };
    s3Bucket.putObject(data, function(err, data){
        if (err) { 
            callback(err, null);
        } else {
            var s3Url = "https://" + s3BucketName + '.' + "s3.amazonaws.com/" + filePath;
            callback(null, s3Url);
        }
    });
};

Running the App:

Run the application by executing the react-native run-android command from the terminal window. Below are the screenshots of the app running on an Android device.

That’s all folks! I hope it was helpful. Any queries, please drop them in the comments section.

This story is authored by Dheeraj Kumar. He is a software engineer specializing in React Native and React based frontend development.

Scheduling tasks with AWS SQS and Lambda

In today’s blog post, we will learn a workaround for scheduling or delaying a message with AWS SQS despite the 15-minute (900-second) upper limit on its delivery delay.

But first, let us understand two SQS attributes briefly. The first is Delivery Delay: it lets you specify a delay between 0 and 900 seconds (15 minutes); when set, any message sent to the queue becomes visible to consumers only after the configured delay period. The second is Visibility Timeout: the period during which a message received from the queue is invisible to further receive requests, unless it is deleted from the queue.
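As a sketch, both attributes can be set when creating a queue with boto3; the queue name is a placeholder, and the default values shown here are the service ceilings.

```python
def delayed_queue_attributes(delay_seconds=900, visibility_timeout=43200):
    # SQS ceilings: DelaySeconds <= 900 (15 min), VisibilityTimeout <= 43200 (12 h)
    if not (0 <= delay_seconds <= 900):
        raise ValueError('DelaySeconds must be between 0 and 900')
    if not (0 <= visibility_timeout <= 43200):
        raise ValueError('VisibilityTimeout must be between 0 and 43200')
    # SQS attribute values are passed as strings
    return {'DelaySeconds': str(delay_seconds),
            'VisibilityTimeout': str(visibility_timeout)}

def create_task_queue(name):
    import boto3  # deferred import; actually running this needs AWS credentials
    return boto3.client('sqs').create_queue(
        QueueName=name, Attributes=delayed_queue_attributes())['QueueUrl']
```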

If you want to learn about dead letter queue and deduplication, you could follow my other article: Processing High Volume Big Data Concurrently with No Duplicates using AWS SQS.

So, when a consumer receives a message, the message remains in the queue but is invisible for the duration of its visibility timeout, after which other consumers will be able to see the message. Ideally, the first consumer would handle and delete the message before the visibility timeout expires.

The upper limit for visibility timeout is 12 hours. We could leverage this to schedule/delay a task.

A typical combination is SQS with Lambda, where the invoked function executes the task. Standard queues with Lambda triggers have immediate consumption: when a message is inserted into the queue, the Lambda function is invoked immediately, with the message available in the event object.

Note: If the Lambda results in an error, the message stays in the queue for further receive requests; otherwise it is deleted.

That said, there could be 2 cases:

  1. A generic setup that can adapt to a range of time delays.
  2. A stand-alone setup built to handle only a fixed time delay.

The idea is to insert a message into the queue with the task details and the time to execute (target time), and have the Lambda do the dirty work.
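A minimal sketch of the producer side, assuming the message body carries task_details and execute_at in the same format the handler later parses:

```python
def task_message_body(task_details, execute_at):
    # execute_at uses the format the handler parses: "%d/%m/%Y, %H:%M %p CST"
    import json
    return json.dumps({'task_details': task_details, 'execute_at': execute_at})

def enqueue_task(queue_url, task_details, execute_at):
    import boto3  # deferred import; actually sending needs AWS credentials
    boto3.client('sqs').send_message(
        QueueUrl=queue_url,
        MessageBody=task_message_body(task_details, execute_at))
```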

Case 1:

The Lambda function checks whether the target time equals the current time. If so, it executes the task, and the message is deleted because the Lambda completes without error. Otherwise, it changes the visibility timeout of that message to the delta difference and raises an error, leaving the message in the queue.
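The Case 1 logic can be sketched roughly as below, with hypothetical helper names; remaining_delay works on plain Unix timestamps, and the receipt handle comes from the Lambda event record.

```python
def remaining_delay(target_ts, now_ts):
    # Seconds left until the target execution time; 0 means the task is due
    return max(0, int(target_ts - now_ts))

def reschedule_or_execute(receipt_handle, queue_url, target_ts, now_ts):
    import boto3  # deferred import; actually running this needs AWS credentials
    delta = remaining_delay(target_ts, now_ts)
    if delta == 0:
        return 'execute'  # run the task; a clean return lets the message be deleted
    # Not yet time: push the message's visibility out by the remaining delta
    boto3.client('sqs').change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=receipt_handle,
        VisibilityTimeout=min(delta, 43200),  # clamp to the 12-hour ceiling
    )
    # Raising keeps the message in the queue; it resurfaces after the new timeout
    raise RuntimeError('Task rescheduled for %d seconds from now' % delta)
```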

Case 2:

The queue’s default visibility timeout is configured with the required fixed time delay. The Lambda function checks whether the difference between the target time and the current time equals the fixed delay. If so, it executes the task, and the message is deleted because the Lambda completes without error; otherwise it simply raises an error, leaving the message untouched in the queue.

The message is retried after its visibility timeout, which equals the required fixed time delay, and is then executed.

The problem with this approach is accuracy and scalability.

Here’s the Lambda code for Case 2:
Processor.py

import boto3
import json
from datetime import datetime, timezone
import dateutil.tz

tz = dateutil.tz.gettz('US/Central')

fixed_time_delay = 1 # delay in minutes; adjust the unit to match the extraction below

def lambda_handler(event, context):
    # TODO implement
    
    message = event['Records'][0]
    # print(message)
    result = json.loads(message['body'])
    
    task_details = result['task_details']
    target_time = result['execute_at']
    
    tt = datetime.strptime(target_time, "%d/%m/%Y, %H:%M %p CST")
    print(tt)
    
    t_now = datetime.now(tz)
    time_now = t_now.strftime("%d/%m/%Y, %H:%M %p CST")
    tn = datetime.strptime(time_now, "%d/%m/%Y, %H:%M %p CST")
    print(tn)
    
    delta_time = tn - tt
    print(delta_time)

    # Extract the delay in the same unit as fixed_time_delay (minutes here)
    delta_in_minutes = int(delta_time.total_seconds() // 60)

    if delta_in_minutes == fixed_time_delay:
        # execute task logic
        print(task_details)
    else:
        raise Exception('Delay not yet elapsed; message returns to the queue')

Conclusion:
Scheduling tasks using SQS isn’t effective in all scenarios. You could use AWS Step Functions’ wait state for millisecond accuracy, or DynamoDB’s TTL feature to build an ad hoc scheduling mechanism; the choice of service largely depends on the requirement. Here’s a wonderful blog post that gives you a bigger picture of the different ways to schedule a task on AWS.

This story is authored by Koushik. Koushik is a software engineer specializing in AWS Cloud Services.

Programmatically Updating Autoscaling policy on DynamoDB with boto3: Application Auto Scaling

In this blog post, we will be learning how to programmatically update the auto-scaling policy settings of a DynamoDB table. The idea is to scale it smoothly (minimal write request throttling) irrespective of the anticipated traffic spikes it receives. We do this using AWS Application Auto Scaling and Lambda (boto3).

Understanding how DynamoDB auto-scales

DynamoDB auto scaling works based on Cloudwatch metrics and alarms built on top of 3 parameters:

  1. Provisioned capacity
  2. Consumed capacity
  3. Target utilization

Let us understand these parameters briefly. Provisioned capacity is how many units are allocated for the table/index, while consumed capacity is how much of it is utilized. Target utilization is the threshold (%) of utilization at which we want autoscaling to happen; accordingly, provisioned capacity is increased.
Note: Capacity can be for either writing or reading, sometimes both.

Let us say you provisioned 50 write units for the table and want it to autoscale once 40 units (80%) are consumed. Then your autoscaling policy would look something like this:

Provisioned write capacity: 50 units
Target utilization: 80%
Max write units: 10,000
Min write units: 30

This autoscaling policy looks fine, yet it is static. Can it cope with unpredictable traffic spikes? How long does it take to autoscale (up to ~15 min)? And if the added units fall short of what is required, how do you reduce throttling?

Traffic spikes

Broadly traffic spikes could be categorized into 2 categories:

  1. Anticipated spike
  2. Sudden spike

And the traffic fluctuation curve can be any of the following:

  1. Steady rise and fall.
  2. Steep peaks and valleys.

Having a single static autoscaling policy that copes with all the above cases, or a combination of them, is not possible. DynamoDB auto-scales capacity based on target capacity utilization rather than on the incoming write request rate, which makes it comparatively slow to react.

Target capacity utilization is the percentage of consumption over Provisioned. Mathematically put (consumed/provisioned)*100.
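As a quick sanity check, this arithmetic can be sketched in Python (the 80% target is just the example value from earlier):

```python
def target_utilization(consumed_units, provisioned_units):
    """Percentage of provisioned capacity that is actually consumed."""
    return (consumed_units / float(provisioned_units)) * 100

def crosses_target(consumed_units, provisioned_units, target_pct=80.0):
    # autoscaling reacts once utilization reaches the target threshold
    return target_utilization(consumed_units, provisioned_units) >= target_pct

print(target_utilization(40, 50))  # 80.0
print(crosses_target(40, 50))      # True
```

With 40 of 50 units consumed, utilization sits exactly at the 80% target, so a scaling action would be triggered.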

If the traffic is unpredictable and you have to understand and improve (tweak) DynamoDB autoscaling’s internal workflow, here is a wonderful blog post that I would highly recommend.

Anticipated traffic is in direct correlation with your actions, be it a scheduled marketing event, data ingestion or even sale transactions on a festive day. In such scenarios, a workaround would be the ability to programmatically update the autoscaling policy.

Updating autoscaling policy using AWS Application Auto Scaling.

AWS has this service called AWS Application Auto Scaling. With Application Auto Scaling, you can configure/update automatic scaling for the following resources:

  • Amazon ECS services
  • Amazon EC2 Spot Fleet requests
  • Amazon EMR clusters
  • Amazon AppStream 2.0 fleets
  • Amazon DynamoDB tables and global secondary indexes throughput capacity
  • Amazon Aurora Replicas
  • Amazon SageMaker endpoint variants
  • Custom resources provided by your own applications or services
  • Amazon Comprehend document classification endpoints
  • AWS Lambda function provisioned concurrency

The scaling of the provisioned capacity of these services is managed by the autoscaling policy that is in place. AWS Application Auto Scaling service can be used to modify/update this autoscaling policy.

Here is a sample Lambda (python) code that updates DynamoDB autoscaling settings:

# update-dynamo-autoscale-settings
import os
import sys
import boto3
from botocore.exceptions import ClientError

# Add path for additional imports
sys.path.append('./lib/python3.7/site-packages')

# Initialize boto3 clients
dynamodb = boto3.resource('dynamodb')
dynamodb_scaling = boto3.client('application-autoscaling')

# Initialize variables
table_name = "your dynamo table name"
table = dynamodb.Table(table_name)

def update_auto_scale_settings(min_write_capacity: int, table_name: str): 
    max_write_capacity = 40000 #default number
    dynamodb_scaling.register_scalable_target(ServiceNamespace = "dynamodb",
                                                 ResourceId = "table/{}".format(table_name),
                                                 ScalableDimension = "dynamodb:table:WriteCapacityUnits",
                                                 MinCapacity = min_write_capacity,
                                                 MaxCapacity = max_write_capacity)
    
    # if you have indexes on the table, add their names to the indexes list below
    # indexes = ["index1", "index2", "index3"]
    # for index_name in indexes:
    #     dynamodb_scaling.register_scalable_target(ServiceNamespace = "dynamodb",
    #                                              ResourceId = "table/{table_name}/index/{index_name}".format(table_name = table_name, index_name = index_name),
    #                                              ScalableDimension = "dynamodb:index:WriteCapacityUnits",
    #                                              MinCapacity = min_write_capacity,
    #                                              MaxCapacity = max_write_capacity)



# put_scaling_policy is another call that needs to be made if you want a different target utilization value.

def lambda_handler(event, context):
    
    try:

        # logic before updating

        write_units = 100  # example value: the new minimum write capacity
        update_auto_scale_settings(write_units, table_name)
                
        # logic after updating

        # Insert item into dynamo
        # table.put_item(Item = record)
                    
    except Exception as e:
        raise e
    
    else:
        print("success")
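The put_scaling_policy call mentioned in the comment above could be sketched as follows. This only builds the request parameters; the policy name, target value, and the commented-out client call are illustrative placeholders:

```python
def build_write_scaling_policy(table_name, target_pct=80.0):
    # request parameters for Application Auto Scaling's put_scaling_policy API
    return {
        "PolicyName": "{}-write-scaling-policy".format(table_name),
        "ServiceNamespace": "dynamodb",
        "ResourceId": "table/{}".format(table_name),
        "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": target_pct,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
            }
        }
    }

params = build_write_scaling_policy("your dynamo table name")
# dynamodb_scaling.put_scaling_policy(**params)  # using the client from the sample above
```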

I hope it was helpful. Thanks for the read!

This story is authored by Koushik. Koushik is a software engineer and a keen data science and machine learning enthusiast.

Image Text Detection with Bounding Boxes using OpenCV in React Native Mobile App

In our earlier blog post, we built a Text Detection App with React Native using AWS Rekognition. The Text Detection App detects the texts and their dimensions in the captured image. This blog is an extension of it, where we shall learn how to draw bounding boxes using the dimensions of the detected text in the image. Assuming you have followed our earlier blog and created the Text Detection App, let us proceed further.

The following diagram depicts the architecture we will be building. 

The React app sends the image to be processed via an API call. The detect_text lambda function stores it in S3 and calls Amazon Rekognition with its URL to get the detected texts along with their dimensions. With this data it invokes the draw_bounding_box lambda function, which fetches the image from S3, draws the bounding boxes, and stores the result as a new image. The new image’s URL is returned to the detect_text lambda, which in turn responds to the app via API Gateway.

In our previous blog we already have finished detecting the text part, let us look at creating the rest of the setup.

We will use another AWS lambda function to draw bounding boxes and that function would need OpenCV.

OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products.

Preparing the package for draw_bounding_box Lambda:

We need the OpenCV and Numpy libraries for image manipulation, but Lambda doesn’t include them by default. So we will prepare the lambda code as a package locally, along with its dependencies, and then upload it.

To prepare these libraries, follow this link. After finishing the process you will get a zip file. Unzip the file and copy the below lambda code into a .py file.

Note: The name of this .py file should match the module name in your Lambda handler setting (default name: lambda_handler).

lambda_handler.py:

import cv2
import boto3
import numpy as np
import base64
import json

def lambda_handler(event, context):
    bucketName='<-Your-Bucket-Name->'
    s3_client = boto3.client('s3')

    s3 = boto3.resource('s3')
    bucket = s3.Bucket(bucketName)

    # reading image name from event object
    obj = bucket.Object(key=event['image'])
    # Image we read from S3 bucket
    response = obj.get()

    imgContent = response["Body"].read()
    # convert the raw bytes into a numpy array, then decode it into an image matrix
    # (np.fromstring also works here but is deprecated in newer numpy versions)
    np_array = np.frombuffer(imgContent, np.uint8)
    image_np = cv2.imdecode(np_array, cv2.IMREAD_COLOR)

    '''
    imdecode reads an image from the specified buffer in memory.
    If the buffer is too short or contains invalid data, an empty matrix/image is returned.
    For images on disk, OpenCV can read different file formats (JPG, PNG, TIFF, etc.) using imread.
    '''
    height, width, channels = image_np.shape
    # reading dimensions from the event object
    dimensions=json.loads(event['boundingBoxDimensions'])

    for dimension in dimensions:
        leftbox = int(width * dimension['Left'])
        topbox = int(height * dimension['Top'])
        widthbox = int(width * dimension['Width'])
        heightbox = int(height * dimension['Height'])
        # Using cv2.rectangle, we will draw a rectangular box with respect to dimensions
        k=cv2.rectangle(image_np, (leftbox, topbox), (leftbox+widthbox, topbox+heightbox) ,(0, 255, 0), 2)

    # write the image changes to a local file; in Lambda, /tmp is the only writable path.
    # For local testing, create the folder (here: tmp) and place a sample image (here: sample.jpeg)
    # in it first; otherwise you may encounter the following error:
    # 'utf8' codec can't decode byte 0xaa in position 1: invalid start byte (UnicodeDecodeError)
    cv2.imwrite("/tmp/sample.jpeg", k)

    newImage="<-New-Image-Name->"
    # we put the image in S3. And return the image name as we store the modified image in S3
    s3_client.put_object(Bucket=bucketName, Key=newImage, Body=open("/tmp/sample.jpeg", "rb").read())

    return {
        'statusCode': 200,
        'imageName':newImage,
        'body': 'Hello from Lambda!'
    }
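To make the dimension math in the loop above concrete: Rekognition returns each bounding box as ratios of the image size, which we scale into pixel coordinates before drawing. The numbers below are illustrative:

```python
width, height = 800, 600  # decoded image size (illustrative)
dimension = {"Left": 0.1, "Top": 0.2, "Width": 0.5, "Height": 0.1}

# scale each ratio by the corresponding image dimension
leftbox = int(width * dimension["Left"])
topbox = int(height * dimension["Top"])
widthbox = int(width * dimension["Width"])
heightbox = int(height * dimension["Height"])

# the two corners passed to cv2.rectangle:
top_left = (leftbox, topbox)
bottom_right = (leftbox + widthbox, topbox + heightbox)
print(top_left, bottom_right)  # (80, 120) (480, 180)
```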

The package folder structure would look like below.

As these files exceed the Lambda console upload limit, we will upload the package to S3 and reference it from there.

Zip this lambda-package and upload it to S3. Paste its S3 URL in your function code settings and change the lambda runtime to Python 2.7 (an OpenCV dependency).

Invoking draw_bounding_box lambda

The detect_text lambda invokes draw_bounding_box lambda in RequestResponse mode, which means detect_text lambda waits for the response of draw_bounding_box lambda.

The draw_bounding_box lambda function reads the image name and box dimensions from the event object. Below is the code for detect_text lambda which invokes the draw_bounding_box lambda function.

detect_text.js

const AWS = require('aws-sdk');
// added package
const S3 = new AWS.S3({signatureVersion: 'v4'});

var rekognition = new AWS.Rekognition();
var s3Bucket = new AWS.S3( { params: {Bucket: "<-Your-Bucket-Name->"} } );
var fs = require('fs');
// To invoke lambda function
var lambda = new AWS.Lambda();

exports.handler = (event, context, callback) => {
    let parsedData = JSON.parse(event);
    let encodedImage = parsedData.Image;
    var filePath = parsedData.name;

    let buf = Buffer.from(encodedImage.replace(/^data:image\/\w+;base64,/, ""),'base64');
    var data = {
        Key: filePath, 
        Body: buf,
        ContentEncoding: 'base64',
        ContentType: 'image/jpeg'
    };
    s3Bucket.putObject(data, function(err, data){
        if (err) { 
            console.log('Error uploading data: ', err);
            callback(err, null);
        } else {
            var params = {
                Image: {
                    S3Object: {
                        Bucket: "your-s3-bucket-name", 
                        Name: filePath
                    }
                }
            };

            rekognition.detectText(params, function(err, data) {
                if (err){
                    console.log(err, err.stack);
                    callback(err);
                }
                else{
                    console.log("data: ",data);
                    var detectedTextFromImage=[];
                    var geometry=[];
                    for (item in data.TextDetections){
                      if(data.TextDetections[item].Type === "LINE"){
                        geometry.push(data.TextDetections[item].Geometry.BoundingBox);
                        detectedTextFromImage.push(data.TextDetections[item].DetectedText);
                      }
                    }
                    var dimensions=JSON.stringify(geometry);
                    var payloadData={
                        "boundingBoxDimensions":dimensions,
                        "image": filePath
                    };

                    var params = {
                        FunctionName: 'draw_bounding_box',
                        InvocationType: "RequestResponse",
                        Payload: JSON.stringify(payloadData)
                    };
                    
                    lambda.invoke(params, function(err, data) {
                        if (err){
                            console.log("error occurred");
                            console.log(err);
                        }
                        else{
                            var jsondata=JSON.parse(data.Payload);
                            var params = {
                                Bucket: "your-s3-bucket-name", 
                                Key: jsondata.imageName,
                            };
                            s3Bucket.getSignedUrl('getObject', params, function (err, url) {
                                var responseData={
                                        "DetectedText":detectedTextFromImage,
                                    "url":url
                                }
                                callback(null, responseData);
                            });                            
                        }
                    });
                    console.log("waiting for response");
                }
            });
        }
    });
};

Everything is similar except the rekognition.detectText() function. Upon success, we are storing the detected text in a list and dimensions in another list. Next, we need to pass the dimensions list and image name as arguments to the draw_bounding_box lambda function.

var payloadData={
    "boundingBoxDimensions":dimensions,
    "image": filePath
};

var params = {
    FunctionName: 'draw_bounding_box',
    InvocationType: "RequestResponse",
    Payload: JSON.stringify(payloadData)
};
lambda.invoke(params, function(err, data) {
    if (err){
        console.log("error occurred");
        console.log(err);
    }
    else{
        var jsondata=JSON.parse(data.Payload);
        var params = {
            Bucket: "your-s3-bucket-name", 
            Key: jsondata.imageName,
        };
        s3Bucket.getSignedUrl('getObject', params, function (err, url) {
            var responseData={
                "DetectedText":detectedTextFromImage,
                "url":url
            }
            callback(null, responseData);
        });                            
    }
});

Lambda.invoke() expects two arguments: the first is an object containing the name of the lambda function, the invocation type, and the payload data; the second is a callback to handle the success or failure response. When the detect_text lambda invokes the draw_bounding_box function, it processes the image and responds back to the detect_text lambda. Upon success, we get a JSON object which contains the modified image name.

Next, we use s3Bucket.getSignedUrl() to get the image URL, which we send to our React Native app along with the detected text as the response.

Replace the existing App.js file in your React Native project with the code below.
App.js

import React, {Component} from 'react';
import {
    StyleSheet,
    View,
    Text,
    TextInput,
    Image,
    ScrollView,
    TouchableHighlight,
    ActivityIndicator
} from 'react-native';
import ImagePicker from "react-native-image-picker";
import Amplify, {API} from "aws-amplify";
Amplify.configure({
    API: {
        endpoints: [
            {
                name: "<-Your-API-name->",
                endpoint: "<-Your-end-point-url->"
            }
        ]
    }
});

class Registration extends Component {
  
    constructor(props){
        super(props);
        this.state =  {
            isLoading : false,
            showInputField : false,
            imageName : '',
            capturedImage : '',
            detectedText: []
        };
    }

    captureImageButtonHandler = () => {
        ImagePicker.showImagePicker({title: "Pick an Image", maxWidth: 800, maxHeight: 600}, (response) => {
            console.log('Response - ', response);
            if (response.didCancel) {
                console.log('User cancelled image picker');
            } else if (response.error) {
                console.log('ImagePicker Error: ', response.error);
            } else if (response.customButton) {
                console.log('User tapped custom button: ', response.customButton);
            } else {
                const source = { uri: 'data:image/jpeg;base64,' + response.data };
                this.setState({
                    imageName: "IMG-" + Date.now(),
                    showInputField: true,
                    capturedImage: response.uri,
                    base64String: source.uri
                })
            }
        });
    }

    submitButtonHandler = () => {
        this.setState({
            isLoading: true
        })
        if (this.state.capturedImage == '' || this.state.capturedImage == undefined || this.state.capturedImage == null) {
            alert("Please Capture the Image");
        } else {
            console.log("submiting")
            const apiName = "<-Your-API-name->";
            const path = "/API-path";
            const init = {
                headers: {
                    'Accept': 'application/json',
                    "Content-Type": "application/x-amz-json-1.1"
                },
                body: JSON.stringify({
                    Image: this.state.base64String,
                    name: this.state.imageName
                })
            }

            API.post(apiName, path, init).then(response => {
                this.setState({
                    capturedImage: response.url,
                    detectedText: response.DetectedText,
                    isLoading:false
                })
            });
        }
    }
  
    render() {
        let inputField;
        let submitButtonField;
        if (this.state.showInputField) {
            inputField=
                    <View style={styles.buttonsstyle}>
                        <TextInput
                            placeholder="Img"
                            value={this.state.imageName}
                            onChangeText={imageName => this.setState({imageName: imageName})}
                            style={styles.TextInputStyleClass}
                        />
                    </View>;
            submitButtonField=<TouchableHighlight style={[styles.buttonContainer, styles.submitButton]} onPress={this.submitButtonHandler}>
                            <Text style={styles.buttonText}>Submit</Text>
                        </TouchableHighlight>
            
        }
        
        return (
            <View style={styles.screen}>
                <ScrollView>
                    <Text style= {{ fontSize: 20, color: "#000", textAlign: 'center', marginBottom: 15, marginTop: 10 }}>Text Extracter</Text>

                    {this.state.capturedImage !== "" && <View style={styles.imageholder} >
                        <Image source={{uri : this.state.capturedImage}} style={styles.previewImage} />
                    </View>}
                    {inputField}
                    {this.state.isLoading && (
                        <ActivityIndicator
                            style={styles.Loader}
                            color="#C00"
                            size="large"
                        />
                    )}
                    <View>
                        {
                       this.state.detectedText.map((data, index) => {
                       return(
                           <Text key={index} style={styles.DetextTextView}>{data}</Text>
                       )})
                       }
                    </View>
                    <View style={styles.buttonsstyle}>
                        <TouchableHighlight style={[styles.buttonContainer, styles.captureButton]} onPress={this.captureImageButtonHandler}>
                            <Text style={styles.buttonText}>Capture Image</Text>
                        </TouchableHighlight>
                        {submitButtonField}
                    </View>
                </ScrollView>
            </View>
        );
    }
}

const styles = StyleSheet.create({
    Loader:{
        flex: 1,
        justifyContent: 'center',
        alignItems: 'center',
        height: "100%"
    },
    screen:{
        flex:1,
        justifyContent: 'center',
    },
    buttonsstyle:{
        flex:1,
        alignItems:"center"
    },
    DetextTextView:{
      textAlign: 'center',
    },
    TextInputStyleClass: {
      textAlign: 'center',
      marginBottom: 7,
      height: "70%",
      margin: 10,
      width:"80%"
    },
    inputContainer: {
      borderBottomColor: '#F5FCFF',
      backgroundColor: '#FFFFFF',
      borderRadius:30,
      borderBottomWidth: 1,
      width:"90%",
      height:45,
      marginBottom:20,
      flexDirection: 'row',
      alignItems:'center'
    },
    buttonContainer: {
      height:45,
      flexDirection: 'row',
      alignItems: 'center',
      justifyContent: 'center',
      borderRadius:30,
      margin: 5,
    },
    captureButton: {
      backgroundColor: "#337ab7",
      width: "90%",
    },
    buttonText: {
      color: 'white',
      fontWeight: 'bold',
    },
    horizontal: {
      flexDirection: 'row',
      justifyContent: 'space-around',
      padding: 10
    },
    submitButton: {
      backgroundColor: "#C0C0C0",
      width: "90%",
      marginTop: 5,
    },
    imageholder: {
      borderWidth: 1,
      borderColor: "grey",
      backgroundColor: "#eee",
      width: "50%",
      height: 150,
      marginTop: 10,
      marginLeft: 90,
      flexDirection: 'row',
      alignItems:'center'
    },
    previewImage: {
      width: "100%",
      height: "100%",
    }
});

export default Registration;

Below are the screenshots of the React Native App running on an Android device.
We used the below image to extract text and add bounding boxes.

The image name is generated dynamically with epoch time, which is editable.

I hope it was helpful, thanks for the read!

This story is authored by Dheeraj Kumar and Santosh Kumar. Dheeraj is a software engineer specializing in React Native and React based frontend development. Santosh specializes on Cloud Services based development.

Federated Querying across Relational, Non-relational, Object, and Custom Data Sources using Amazon Athena

Querying Data from DynamoDB in Amazon Athena

Amazon Athena now enables users to run SQL queries across data stored in relational, non-relational, object, and custom data sources. With federated querying, customers can submit a single SQL query that scans data from multiple sources running on-premises or hosted in the cloud.

Athena executes federated queries using Athena Data Source Connectors that run on AWS Lambda. Athena federated query is available in Preview in the us-east-1 (N. Virginia) region.

Preparing to create federated queries is a two-part process:

  1. Deploying a Lambda function data source connector.
  2. Connecting the Lambda function to a data source. 

I assume that you have at least one DynamoDB table in us-east-1 region.

Deploy a Data Source Connector

  • Open the Amazon Athena console and choose the Connect data source. This feature is available in the region us-east-1 only.
  • On the Connect data source console, choose Query a data source feature. And choose Amazon DynamoDB as a data source.
  • Choose Next
  • For the Lambda function, choose to Configure new function. It opens in the Lambda console in a new tab with information about the connector.
  • Under ApplicationSettings, provide the required information.
    1. AthenaCatalogName – A name for the Lambda function.
    2. SpillBucket – An Amazon S3 bucket in your account to store data that exceeds Lambda function response size limits.
    3. SpillPrefix – A prefix under the SpillBucket where data exceeding the Lambda function response size limit is stored.
  • Choose I acknowledge that this app creates custom IAM roles and choose Deploy.

Connect to a data source using the connector deployed in the earlier step

  • Open the Amazon Athena console and choose the Connect data source. This feature is available in the region us-east-1 only.
  • On the Connect data source console, choose Query a data source feature. And choose Amazon DynamoDB as a data source and choose Next.
  • For the Lambda function, choose the name of the lambda function that you created in the earlier step.
  • For the Catalog name, enter a unique name to use for the data source in your SQL queries, such as dynamo_athena.
  • Choose Connect. Now the data source is available under the Data Sources section in Amazon Athena.

Querying Data using Federated Queries

To use this feature in preview, you must create an Athena workgroup named AmazonAthenaPreviewFunctionality and join that workgroup.

Create an Athena workgroup

  • Open the Amazon Athena console and choose Workgroup, and choose Create workgroup.
  • After creating a Workgroup, under Workgroup section select the created workgroup and choose Switch workgroup.
  • Select the data source that was created in the earlier step in Athena. After choosing the data source, the DynamoDB tables are available in Athena under the default database.

Querying Data in Athena using SQL Queries

The following query is used to retrieve data from DynamoDB in Athena.

SELECT * FROM "data_source_connector"."database_name"."table_name";

Creating Athena table using CTAS with results of querying DynamoDB

The CTAS query looks like the following. Using a CTAS query, the data can be converted into the required format, be it Parquet, JSON, CSV, etc.

CREATE TABLE database.table_name
WITH (
      external_location = 's3://bucket-name/data/',
      format = 'parquet')
AS 
SELECT * FROM "data_source_connector"."database_name"."table_name";

I hope this was helpful and look forward to your comments.

This story is authored by PV Subbareddy. Subbareddy is a Big Data Engineer specializing on AWS Big Data Services and Apache Spark Ecosystem.

Efficiently Tagging AWS Resources Using CLI to Better Manage Resources and Billing Costs

It is common, when organizations have large workloads spanning a multitude of AWS services, to lose track of how resources are being used. In a nutshell, identifying resources can take rigorous effort. On AWS, utilization and cost go hand in hand, and tagging helps ensure that resources are managed efficiently. In fact, one could also build insightful reports/dashboards with the tags in place.

Tagging Strategy:

For tags to be effective at scale they need to be strategically managed. Many organizations group tags into different categories like technical, business, security and automation, etc. A typical set of tags could be:

  1. Name
  2. Owner
  3. Application/Project/Product
  4. Environment
  5. Client/Customer

For more on creative tagging strategies, please read this.

Prerequisites: AWS CLI configured.

Getting all untagged resources using CLI:

As of this writing, there is no CLI command to list all untagged resources. One could follow the below steps to get the list.

Step 1: List all the resources in AWS and write them to a text file

aws lambda list-functions --profile PROFILE_NAME &>> resourcesList.txt

Note: The above command is for listing details of lambda resources. The command and its output might vary with other resources. Read more here.

&>> appends the output of the command to resourcesList.txt file in the current working directory.

The output of the above command is a JSON object that looks like this:

{
    "Functions": [
        {
            "FunctionName": "Chat-Conversation-POST",
            "FunctionArn": "arn:aws:lambda:us-west-2:89XXXXXXXX14:function:Chat-Conversation-POST",
            "Runtime": "nodejs8.10",
            "Role": "arn:aws:iam::89XXXXXXXX14:role/chat-lambda-data",
            "Handler": "index.handler",
            "CodeSize": 474,
            "Description": "",
            "Timeout": 15,
            "MemorySize": 128,
            "LastModified": "2019-05-02T13:20:53.887+0000",
            "CodeSha256": "h1bxXaXXXXXXxxxxxxxxxXxxXxxxxxxXXXxxxxxxmGg=",
            "Version": "$LATEST",
            "TracingConfig": {
                "Mode": "PassThrough"
            },
            "RevisionId": "f447bca3-06f9-49d8-8a5d-c740f6aec405"
        },
        {
            "FunctionName": "Chat-Conversation-GET",
            "FunctionArn": "arn:aws:lambda:us-west-2:89XXXXXXXX14:function:Chat-Conversation-GET",
            "Runtime": "nodejs8.10",
            "Role": "arn:aws:iam::89XXXXXXXX14:role/service-role/chat-lambda-data",
            "Handler": "index.handler",
            "CodeSize": 785,
            "Description": "",
            "Timeout": 25,
            "MemorySize": 128,
            "LastModified": "2019-05-04T14:23:07.002+0000",
            "CodeSha256": "h1bxXaXXXXXXxxxxxxxxxXxxXxxxxxxXXXxxxxxxmGg=",
            "Version": "$LATEST",
            "VpcConfig": {
                "SubnetIds": [],
                "SecurityGroupIds": [],
                "VpcId": ""
            },
            "TracingConfig": {
                "Mode": "PassThrough"
            },
            "RevisionId": "210dd3fa-ba47-4e06-ab53-e34aa793b344"
        }
    ]
}

Now, one could either use multiple selection (ctrl+d) in Sublime or a python script to extract the list of resource ARN/names.
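If you’d rather script it, a small Python helper could parse the JSON in resourcesList.txt and pull out the function names and ARNs (this assumes the file contains exactly the JSON object shown above):

```python
import json

def extract_functions(path):
    # parse the output of `aws lambda list-functions` saved to a file and
    # return (name, arn) pairs for every function
    with open(path) as f:
        data = json.load(f)
    return [(fn["FunctionName"], fn["FunctionArn"])
            for fn in data.get("Functions", [])]

# Example:
# for name, arn in extract_functions("resourcesList.txt"):
#     print(name, arn)
```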

Step 2: Iterate over this list of resource names, fetch the tagging details for each of them, and append the output of these commands to a file.

echo RESOURCE_NAME: &>> tagsList.txt

aws lambda list-tags --resource arn:aws:lambda:us-west-2:89XXXXXXXX14:function:RESOURCE_NAME --profile PROFILE_NAME &>> tagsList.txt

The output of the above command is also a JSON object:

{
    "Tags": {}
}

As you can see, there is no name attribute here, so we prepend the resource name to the command output using echo:

RESOURCE_NAME:{
    "Tags": {}
}

Let us say, the resource names we have got in resourcesList.txt are as follows:

  • new-client-acquisition
  • initiate-raw-file-ingestion
  • initiate-raw-crawler
  • raw-refined-transform
  • initiate-refined-crawler
  • check-status

Creating commands for the above resources in sublime:

Step 3: Extract resources with no tags from the tagsList.txt file.

Untagged = all – tagged

From the resourcesList.txt we get all the resource names, and from the tagsList.txt we get all tagged resources. You could use both these lists to get the untagged resources.

Step 4: Preparing and updating the tags

aws lambda tag-resource --resource arn:aws:lambda:us-west-2:89XXXXXXXX14:function:RESOURCE_NAME --tags Environment=prod,Project=sales,Name=RESOURCE_NAME --profile PROFILE_NAME

Create multiple commands for each resource name with the above template.
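The set difference from Step 3 and the command generation above can both be scripted in Python; the resource names, tags, account ID, and profile name below are placeholders:

```python
all_resources = {"new-client-acquisition", "initiate-raw-crawler", "check-status"}
tagged_resources = {"check-status"}

# Step 3: untagged = all - tagged
untagged = sorted(all_resources - tagged_resources)

# Step 4: render one tag-resource command per untagged resource
template = ("aws lambda tag-resource "
            "--resource arn:aws:lambda:us-west-2:89XXXXXXXX14:function:{name} "
            "--tags Environment=prod,Project=sales,Name={name} "
            "--profile PROFILE_NAME")

commands = [template.format(name=name) for name in untagged]
for cmd in commands:
    print(cmd)
```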

Once you create all the commands just copy and paste them in the terminal. That would update all the resources with new tags.

This is pretty much the steps involved in tagging resources, maybe a few tweaks have to be made depending on the AWS service.

Note: All the above commands run against the default region configured in the AWS CLI profile unless a region is specified in the command.

One other way of tagging resources on AWS is using the Tag Editor in Resource Groups. I found it hard to work with, as one couldn’t easily search, filter, or group resource names.

I hope it was helpful. For any queries or if you know a better way of tagging let us know in the comment section. Happy to discuss it further.

Thank-you!

This story is authored by Koushik. Koushik is a software engineer and a keen data science and machine learning enthusiast.

How to Customize QuickSight Dashboards for User Specific Data

We have been getting a lot of queries on how to customize a single QuickSight dashboard for user specific data. We can accomplish this by filtering the dashboard data with login username using AWS QuickSight’s Row-Level Security. To further explain this use-case, let’s consider the sales department in a company. Every day your team of sales agents contacts a list of potential customers. Now you need a single dashboard that is accessed by all the agents but only displays the list of prospects he or she is assigned to.

Note: This is completely different from filter/controls on QuickSight dashboards. If you have filters/controls/parameters set up with dynamic values being picked up from the dataset, then even that data is filtered with Row-Level security, as the underlying dataset itself is filtered with the login username.

Let’s get on with the show! I have created a hypothetical data set. This dataset has a column named agent_assigned which shall be used for filtering.

Using this dataset, I have created a dashboard that looks like below.

This dashboard is shared with two other IAM users (sales agents).

As we haven’t set up any rules yet, both of them can access the whole data.

As you can see, ziva could also access the whole data, and we don’t want that!

Our requirement:

User Name | Agent Name   | Permissions
nick      | Nick Howe    | Can access only his prospects
ziva      | Ziva Medalle | Can access only her prospects
manager   | NA           | Super user, can access all prospects

Creating Data Set Rules for Row-Level Security:

Create a file or a query that contains the data set rules (permissions).

It doesn’t matter what order the fields are in. However, all the fields are case-sensitive: they must exactly match the field names and values in your dataset.

The structure should look similar to one of the following. You must have at least one field that identifies either users or groups. You can include both, but only one is required, and only one is used at a time. If you are specifying groups, use only Amazon QuickSight groups or Microsoft AD groups.

The following example shows a table with user names.

UserName | agent_assigned
nick     | Nick Howe
ziva     | Ziva Medalle
manager  | Nick Howe,Ziva Medalle

For SQL:

/* for users*/
select User as UserName, Agent as agent_assigned
from permissions_table;

Or if you prefer to use a .csv file:

UserName,agent_assigned
"nick","Nick Howe"
"ziva","Ziva Medalle"
"manager","Nick Howe,Ziva Medalle"

Here agent_assigned is a column in the dataset, and UserName is the same as QuickSight login name.

What we are essentially doing is mapping UserName to the agent_assigned column. Suppose ziva has logged in: only those records matching the condition agent_assigned = Ziva Medalle are picked up. The same goes for nick.

But in the case of the manager, we want him to be a superuser, so we added all the agent names (all values of agent_assigned column).
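To make the filtering semantics concrete, here is a small Python sketch that mimics what QuickSight's row-level security does with the rules file above. The filtering itself happens inside QuickSight; this is only an illustration, and the prospect rows are made up:

```python
import csv
import io

# The data set rules from the post, as a CSV string.
RULES_CSV = """UserName,agent_assigned
nick,Nick Howe
ziva,Ziva Medalle
manager,"Nick Howe,Ziva Medalle"
"""

def visible_rows(rows, username):
    """Keep only rows whose agent_assigned value is listed for this user."""
    rules = {r["UserName"]: r["agent_assigned"].split(",")
             for r in csv.DictReader(io.StringIO(RULES_CSV))}
    allowed = set(rules.get(username, []))  # unknown user sees nothing
    return [row for row in rows if row["agent_assigned"] in allowed]

# Hypothetical dashboard records:
prospects = [
    {"prospect": "Acme Corp", "agent_assigned": "Nick Howe"},
    {"prospect": "Globex",    "agent_assigned": "Ziva Medalle"},
]
```

With these rules, `visible_rows(prospects, "ziva")` returns only the Globex row, while `visible_rows(prospects, "manager")` returns both, matching the superuser behaviour described above.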

Note: If you are using an Athena or an RDS or a Redshift or an S3 CSV file-based dataset, just make sure the output format/structure of those sources matches the above-mentioned formats.

Create Permissions Data Set:

Create a QuickSight dataset with the above data set rules. Go to Manage data, choose New data set, choose source and create accordingly. As mine is a CSV, I have just uploaded it. To make sure that you can easily find it, give it a meaningful name, for example in my case Permissions-prospects-list.

After finishing, refresh the page, as the new dataset might not appear in the data sources list when you go to apply it.

Creating Row-Level Security: 

Choose Permissions. From the list, choose the permissions dataset that you created earlier.

Choose Apply data set.

Once you have applied it, you should see a new lock symbol on the dataset, marking it as restricted.

That’s it. Now the data is filtered/secured based on username.

Manager’s Account:

Ziva’s Account:

Nick’s Account:

You could also add Users to Groups and have permissions set at the group level. More information here.

I hope it was helpful, any queries drop them in the comments section.

Thanks for the read!

This story is authored by Koushik. Koushik is a software engineer and a keen data science and machine learning enthusiast.

Machine Learning based Fuzzy Matching using AWS Glue ML Transforms

Machine Learning Transforms in AWS Glue

AWS Glue provides machine learning capabilities to create custom transforms to do Machine Learning based fuzzy matching to deduplicate and cleanse your data. For this we are going to use a transform named FindMatches. The FindMatches transform enables you to identify duplicate or matching records in your dataset, even when the records do not have a common unique identifier and no fields match exactly. This will not require writing any code or knowing how machine learning works. For more details about ML Transforms, please go through the docs.

Creating a Machine Learning Transform with AWS Glue

This article walks you through the actions to create and manage a machine learning (ML) transform using AWS Glue. I assume that you are familiar with using the AWS Glue console to add crawlers and jobs and edit scripts. You should also be familiar with finding and downloading files on the Amazon Simple Storage Service (Amazon S3) console.

In case you are just starting out on AWS Glue, I have explained how to create an AWS Glue Crawler and Glue Job from scratch in one of my earlier articles.

The source data used in this blog is a hypothetical file named customers_data.csv. A second file, label_file.csv, is an example of a labeling file that contains both matching and nonmatching records used to teach the transform.

Step 1: Crawl the Data using AWS Glue Crawler

At the outset, crawl the source data from the CSV file in S3 to create a metadata table in the AWS Glue Data Catalog. I created a crawler pointing to the source location (s3://bucketname/data/ml-transform/customers/).

In case you are just starting out on the AWS Glue crawler, I have explained how to create one from scratch in one of my earlier articles. If you run this crawler, it creates a customers table in the specified database (ml-transform).

Step 2: Add a Machine Learning Transform

Next, add a machine learning transform that is based on the schema of your data source table created by the above crawler.

  • On the AWS Glue console, in the navigation pane, choose ML Transforms, Add transform.
    1. For Transform name, enter ml-transform. This is the name of the transform that is used to find matches in the source data.
    2. Choose an IAM role that has permission to access Amazon S3 and AWS Glue API operations. Choose Worker type and Maximum capacity as per your requirements.
    3. For Data source, choose the table created in the earlier step; in this case, the table named customers in the database ml-transform.
    4. For Primary key, choose the primary key column for the table, email.
  • Choose Finish.

Step 3: How to Teach Your Machine Learning Transform

Next, teach the machine learning transform using the sample labeling file.

You can’t use a machine learning transform in an extract, transform, and load (ETL) job until its status is Ready for use. To get your transform ready, you must teach it how to identify matching and non-matching records by providing examples of each. To teach your transform, you can generate a label file, add labels, and then upload the label file.

For this article, the label file I have used is label_file.csv

  • On the AWS Glue console, in the navigation pane, choose ML Transforms.
  • Choose the earlier created transform, and then choose Action, Teach.
  • If you don’t have a label file, choose I do not have labels; you can then generate a label file, add labels, and upload it.

If you have the label file, choose I have labels, then choose Upload labelling file from S3.
Choose an Amazon S3 path to the sample labeling file in the current AWS Region (s3://bucketname/data/ml-transform/labels/label_file.csv), with the option to overwrite existing labels. The labeling file must be located in S3 in the same Region as the AWS Glue console.

When you upload a labeling file, a task is started in AWS Glue to add or overwrite the labels used to teach the transform how to process the data source.

  • Choose Finish, and return to the ML transforms list.

Step 4: Estimate the Quality of ML Transform

What is Labeling?

The act of labeling is creating a labeling file (such as in a spreadsheet) and adding identifiers, or labels, into the label column that identifies matching and non-matching records. It is important to have a clear and consistent definition of a match in your source data. AWS Glue learns from which records you designate as matches (or not) and uses your decisions to learn how to find duplicate records.
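As a sketch of what such a file can look like: in a FindMatches labeling file, the first two columns are labeling_set_id and label, followed by the source table's own columns. Rows that share both a labeling_set_id and a label are taught as matches; rows that share only the labeling_set_id are taught as non-matches. The customer fields below are made up for illustration:

```python
import csv
import io

# Hypothetical labeling rows: the first two "John Doe" records share
# labeling_set_id 1 and label A, so they teach the transform a match;
# the "Jane Roe" row shares the set but not the label -- a non-match.
label_rows = [
    {"labeling_set_id": "1", "label": "A", "email": "jdoe@example.com",  "name": "John Doe"},
    {"labeling_set_id": "1", "label": "A", "email": "j.doe@example.com", "name": "Jon Doe"},
    {"labeling_set_id": "1", "label": "B", "email": "jane@example.com",  "name": "Jane Roe"},
]

# Write the rows out in the CSV shape expected for a label file.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(label_rows[0].keys()))
writer.writeheader()
writer.writerows(label_rows)
print(buf.getvalue())
```

A file like this, uploaded in the Teach step, is what drives the quality estimate below.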

Next, you can estimate the quality of your machine learning transform. The quality depends on how much labeling you have done.

  • On the AWS Glue console, in the navigation pane, choose ML Transforms.
  • Choose the earlier created transform, and choose the Estimate quality tab. This tab displays the current quality estimates, if available, for the transform.
  • Choose Estimate quality to start a task to estimate the quality of the transform. The accuracy of the quality estimate is based on the labeling of the source data.
  • Navigate to the History tab. In this pane, task runs are listed for the transform, including the Estimating quality task. For more details about the run, choose Logs. Check that the run status is Succeeded when it finishes.

Step 5: Create and Run a Job with ML Transform

In this step, we use the machine learning transform to add and run a job in AWS Glue. When the transform is Ready for use, we can use it in an ETL job.

On the AWS Glue console, in the navigation pane, choose Jobs.

Choose Add job.

In case you are just starting out on AWS Glue ETL Job, I have explained how to create one from scratch in one of my earlier articles.

  • For Name, enter the example job name used in this tutorial, ml-transform.
  • Choose an IAM role that has permission to access Amazon S3 and AWS Glue API operations.
  • For ETL language, choose Spark 2.2, Python 2. Machine learning transforms are currently not supported for Spark 2.4.
  • For Data source, choose the table created in Step 1. The data source you choose must match the machine learning transform’s data source schema.
  • For Transform type, choose Find matching records to create a job using a machine learning transform.
  • For Transform, choose the transform created in Step 2, the machine learning transform used by the job.
  • For Create tables in your data target, choose to create tables with the following properties.
    • Data store type — Amazon S3
    • Format — CSV
    • Compression type — None
    • Target path — The Amazon S3 path where the output of the job is written (in the current console AWS Region)

Choose Save job and edit script to display the script editor page. The script looks like the following. After you edit the script, choose Save.

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglueml.transforms import FindMatches

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## @type: DataSource
## @args: [database = "ml_transforms", table_name = "customers", transformation_ctx = "datasource0"]
## @return: datasource0
## @inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "ml_transforms", table_name = "customers", transformation_ctx = "datasource0")
## @type: ResolveChoice
## @args: [choice = "MATCH_CATALOG", database = "ml_transforms", table_name = "customers", transformation_ctx = "resolvechoice1"]
## @return: resolvechoice1
## @inputs: [frame = datasource0]
resolvechoice1 = ResolveChoice.apply(frame = datasource0, choice = "MATCH_CATALOG", database = "ml_transforms", table_name = "customers", transformation_ctx = "resolvechoice1")
## @type: FindMatches
## @args: [transformId = "eacb9a1ffbc686f61387f63", emitFusion = false, survivorComparisonField = "<primary_id>", transformation_ctx = "findmatches2"]
## @return: findmatches2
## @inputs: [frame = resolvechoice1]
findmatches2 = FindMatches.apply(frame = resolvechoice1, transformId = "eacb9a1ffbc686f61387f63", transformation_ctx = "findmatches2")
## @type: DataSink
## @args: [connection_type = "s3", connection_options = {"path": "s3://bucket-name/data/ml-transforms/output/"}, format = "csv", transformation_ctx = "datasink3"]
## @return: datasink3
## @inputs: [frame = findmatches2]
datasink3 = glueContext.write_dynamic_frame.from_options(frame = findmatches2, connection_type = "s3", connection_options = {"path": "s3://<bucket-name>/data/ml-transforms/output/"}, format = "csv", transformation_ctx = "datasink3")
job.commit()

Choose Run job to start the job run. Check the status of the job in the jobs list. When the job finishes, in the ML transform, History tab, there is a new Run ID row added of type ETL job. 

Navigate to the Jobs, History tab. In this pane, job runs are listed. For more details about the run, choose Logs. Check that the run status is Succeeded when it finishes.

Step 6: Verify Output Data from Amazon S3 in Amazon Athena

In this step, check the output of the job run in the Amazon S3 bucket that you chose when you added the job. You can create a table in the Glue Data catalog pointing to the output location, just like the way we crawled the source data in Step 1. You can then query the data in Athena.

However, the Find matches transform adds another column named match_id to identify matching records in the output. Rows with the same match_id are considered matching records.
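As an illustration, grouping the output rows by match_id in Python surfaces the candidate duplicate clusters (the sample rows below are made up):

```python
from collections import defaultdict

def group_by_match_id(rows):
    """Cluster FindMatches output rows by the appended match_id column."""
    groups = defaultdict(list)
    for row in rows:
        groups[row["match_id"]].append(row)
    # Keep only clusters with more than one record -- the candidate duplicates.
    return {mid: recs for mid, recs in groups.items() if len(recs) > 1}

# Hypothetical rows from the job's CSV output:
output_rows = [
    {"email": "jdoe@example.com",  "match_id": "7"},
    {"email": "j.doe@example.com", "match_id": "7"},
    {"email": "solo@example.com",  "match_id": "9"},
]
```

Here the two jdoe variants share match_id 7 and form a duplicate cluster, while the solo record stands alone; a GROUP BY on match_id in Athena gives the same view directly on the output table.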

If you don’t find any matches, you can continue to teach the transform by adding more labels.

Thanks for the read, and I look forward to your comments!

This story is authored by PV Subbareddy. Subbareddy is a Big Data Engineer specializing on AWS Big Data Services and Apache Spark Ecosystem.