
How I learned to stop worrying and love the serverless

The advent of "server-less" computing has caused a few people to scratch their heads. We went from machines dedicated to a single server, to virtualizing multiple servers on a single machine, to hosting multiple servers on someone else's machines. Now, we run code without even having to provision a server!

Part 1 - The Good, The Bad and The Cloud
Part 2 - How I learned to stop provisioning and love the Serverless
Part 3 - Bucketful of websites
Part 4 - The Lord of the instances
Part 5 - Planning for the future

Pepe Silvia invented Serverless Computing

Since it's a completely different way of thinking about IT, development, and computing, we need to think about why we would use it before we talk about how we would use it. There are some architectures that lend themselves to serverless computing and others that still require a living server.

I'm not going to promise that everyone's projects can be moved to serverless computing, or that if you do move, it'll improve the experience. All I can promise is that I'll explain how serverless computing made sense for my project and how I came to that decision.

Intro to the project

The project that I'll frame this exercise around is one of mine called PyumIpsum. It's a Python-based Lorem Ipsum generator: you simply enter some words and it returns a specified number of randomly generated sentences. It scrapes the Wikipedia articles for those words and uses an existing Markov chain Python library to create models based on the content.
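To make that concrete, here's a minimal sketch of the core idea. The wikipedia and markovify packages are a common pairing for this sort of thing; they're used here purely for illustration rather than being the project's exact dependencies.

# Minimal sketch: scrape Wikipedia, build Markov models, generate sentences.
# The library choices here are an assumption for illustration only.
import wikipedia
import markovify

def generate_sentences(topics, num_sentences=5):
    # Build one Markov model per topic from the article text
    models = [markovify.Text(wikipedia.page(topic).content) for topic in topics]
    # Blend the per-topic models into a single combined model
    combined = markovify.combine(models)
    # make_sentence() can return None when it fails, so filter those out
    sentences = [combined.make_sentence() for _ in range(num_sentences)]
    return [s for s in sentences if s]

print(generate_sentences(["Pizza", "Tupac Shakur"], num_sentences=3))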

I'll have a dedicated blog post about this project itself as I want it to go into some more fun examples of what it can do. This post is more about how I used AWS for it.

Here's a link to the project post :)

It used to run on a $5 DigitalOcean droplet (VPS). I used a Flask-based framework paired with a WSGI server and NGINX so it could interact with the outside world. It only had a few endpoints, so the Python code was easy to maintain. I technically could have used Flask's built-in web server, but that's only really meant for development. There was nothing wrong with this stack, but I felt like something was missing.
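For a sense of how small that surface area was, the old stack boiled down to something like the Flask sketch below. The routes mirror the endpoints described later in this post; the generator function is just a stand-in for the Markov chain code.

# Rough sketch of the old Flask app: a couple of endpoints in front of the
# generator code. Route and function names are illustrative.
from flask import Flask, jsonify, request

app = Flask(__name__)

def generate_sentences(topics, num_sentences=5):
    # Stand-in for the Markov chain generation code
    return ["Sentence about " + ", ".join(topics)] * num_sentences

@app.route("/sentences/<topic>")
def sentences(topic):
    return jsonify(generate_sentences([topic]))

@app.route("/topics/<topic>")
def topics(topic):
    num = int(request.args.get("num", 5))
    chain = request.args.get("chain", "")
    extra = chain.split(",") if chain else []
    return jsonify(generate_sentences([topic] + extra, num_sentences=num))

if __name__ == "__main__":
    # Flask's built-in development server; fine for testing, not production
    app.run()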

One of the main reasons I started to move towards making this a serverless application was the inherent idea behind it. It simply sits there waiting for a request; it could be seconds, or hours between them. So, I had a $5 a month droplet sitting out there, waiting.

Instead of just worrying about my code, I had to worry about maintaining:

1. My Python code
2. NGINX configuration
3. My droplet (VPS)
4. DNS configuration
5. Performance scaling

Since I had already started moving my blog and static websites to AWS, I thought this would be a great time to move this project over as well. While my main reason for moving the code to AWS was to learn AWS Lambda and other services, I also needed to essentially re-write the code, since it had been months since I started working on the project and I already had ideas on how to re-architect it. Like all developers, I promised myself I would properly document all my code…

I currently have two versions of the project up and functional: the first iteration, where I was fumbling through the architecture, and the current one, where all my work is focused now. The move of this project to AWS will mainly be based on implementing the 2nd version.

Moving to AWS

While I had already made up my mind that I was going to move it to AWS, I still needed to decide exactly how I would implement it there. While I could save money by purchasing a 3-year, up-front reserved t2.nano instance (I did this for my blog), I'd still have to worry about managing a server. It'd be the same thing as before, except cheaper and while learning a new technology. One step forward, two steps back.

At this point, I had already begun investigating AWS Lambda for this project. I thought that it was the perfect fit for my projects for multiple reasons:

  • Extremely scalable – If the project suddenly became popular, I wouldn't have to worry about scaling a server to handle the increased load.
  • A lot cheaper – Since I didn't have to pay for a dedicated server and only paid for however long the code ran, it allowed me to essentially run my project for free! I'm currently able to stay within the free tier of Lambda pricing. I'll also go into how I'm able to leverage the free tier of other services to increase performance.
    Note: While the move ended up being cheaper for this project, you may not have the same experience with yours. It wholly depends on your usage and which services you use.
  • Easier to maintain – I only had to worry about my code now. I didn't have to manage a dedicated server or an NGINX proxy. After the initial setup, I essentially turned this project into Code-as-a-Service.

Now that I had made the decision to re-implement the project in AWS Lambda, I had to start putting together the new pieces. After some investigation, I settled on a 3-piece approach:

  • Route 53 – Route 53 is Amazon’s DNS service. This was used to simply host the project domain and connect the different sub-domains to their resources.
  • API Gateway – You can think of API Gateway as a sort of switchboard for your API calls. It lets me easily set up the different query-string and path parameters.
  • AWS Lambda – This handles the Code-As-A-Service part of my project. Requests get routed to here, processed, and then sent back up the chain.

The new architecture

To explain the new architecture, I'll give an overview first and then go into detail about how each service plays its part.

Route 53

Route 53 is the static piece of my projects. I use it to register the domain names and route the DNS entries for the different sub-domains. The only time these entries should change is if I create a new endpoint within API Gateway. There are currently two main entries here: one for the api domain and the other for the home page.
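If you'd rather script those records than click through the console, an alias record for the api sub-domain created with boto3 looks roughly like the sketch below. Every ID and domain name here is a placeholder, not the project's actual configuration.

# Sketch: pointing an api sub-domain at an API Gateway custom domain via a
# Route 53 alias record. All IDs and names are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLE123",  # the project's hosted zone (placeholder)
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.example.com.",
                "Type": "A",
                # Alias to the API Gateway custom domain's endpoint
                "AliasTarget": {
                    "HostedZoneId": "Z2EXAMPLE456",  # API Gateway's zone (placeholder)
                    "DNSName": "d-abc123.execute-api.us-east-1.amazonaws.com.",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)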

API Gateway

I'm using the API Gateway service to route the different requests to their respective Lambda functions. I have it configured with two different resources, one for the old version and the other for the new one. The base-level domain has two separate resources, sentences and topics. Each of those then has its own {topic} path parameter with a GET method attached to it.

Sentences is attached to the old version of the lambda function.

Topics is attached to the newer version of the lambda function.

Along with the available path parameters, a correct request would look something like this:

/topics/Pizza?num=5&chain=Oven,New York City

This would pass the path parameter(s) {topic} along with the query-string parameters num and chain to the new lambda function. If I wanted to use the old one, I would just change the request to:

/sentences/Pizza

The configuration of an API Gateway API could almost have its own post since it’s very easily configurable and gives you a wide range of control over your API. If there’s interest in a dedicated post, let me know!

Lambda

Here's where the magic happens; the trip to WollyWord for the family vacation, the Golden Ticket! Once a request goes through Route 53 and gets routed to the correct API Gateway resource, it executes a Lambda function.
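Assuming the Lambda proxy integration (API Gateway can also use mapping templates, so treat this as an assumption), the handler behind /topics/{topic} ends up looking something like the sketch below; the names are illustrative rather than the project's actual code.

# Sketch of a Lambda handler for /topics/{topic} behind an API Gateway
# proxy integration. Names are illustrative.
import json

def generate_sentences(topics, num_sentences=5):
    # Stand-in for the Markov chain generation code
    return ["Sentence about " + ", ".join(topics)] * num_sentences

def lambda_handler(event, context):
    # /topics/Pizza?num=5&chain=Oven,New York City arrives roughly as:
    #   event["pathParameters"] == {"topic": "Pizza"}
    #   event["queryStringParameters"] == {"num": "5", "chain": "Oven,New York City"}
    topic = event["pathParameters"]["topic"]
    params = event.get("queryStringParameters") or {}
    num = int(params.get("num", 5))
    chain = params.get("chain", "")
    topics = [topic] + ([t.strip() for t in chain.split(",")] if chain else [])

    sentences = generate_sentences(topics, num_sentences=num)

    # Proxy integrations expect a statusCode/headers/body response shape
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"topics": topics, "sentences": sentences}),
    }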

What I did wrong

Learn to swim, then learn to dive

At the beginning of transitioning this project to AWS, I dove into the deep end way too quickly. It wasn't necessarily a case of me not knowing how to do things, more so me making things unnecessarily complicated. My original thought was "Wow, AWS makes all of this available, I might as well use it." I immediately regretted that decision.

I had originally thought of using CloudFormation templates to manage my Lambda functions and API Gateway setup. It involved creating my app with the AWS SAM CLI and managing the code/infrastructure through there. While it was working fine and completely doable, it was just too much for too little. My API Gateway setup was working fine, my Route 53 sub-domains were working fine; the only part I was really changing often was the Lambda code.

As my project grows and incorporates more services and code, I may start to use CloudFormation to manage the different aspects of it. For now, though, I'm only ever changing the Lambda code.

Take a break/Rubber Duck Debugging

For all the developers out there, you're guilty of this, don't lie to yourself. You're working on a problem and it's just not working. You've been slamming the keyboard for hours trying to figure it out and you're starting to go crazy.

Just Walk Away

Don't program for an hour or two; walk away from the computer and take a breather. You've become so consumed by figuring out the issue that you've pigeon-holed your thoughts, and at this point you're just throwing ideas at it. It's become accuracy-by-volume instead of well thought out ideas. Take a walk and either think about things other than your project, or try to explain what your code is trying to do to a rubber duck.

Have you ever been typing out a question on StackOverflow and figured out what's going wrong in the middle of explaining it? That's because you're forced to explain your problem in simple terms so that others may help you. Sometimes you're in too deep and you've lost sight of what you're trying to do.

Plans for the future

While the code is completely functional, there are some improvements I have in mind for it.

Better error handling

While I have some VERY basic error handling in place, I really need to focus on the disambiguation cases when querying Wikipedia. I'll likely just return a list of possible articles so that it doesn't throw a stack-trace.
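Using the wikipedia package as an example of how the scraping side might look (an illustration, not necessarily the project's exact code), the plan amounts to something like this:

# Sketch of the planned disambiguation handling: return the candidate
# articles instead of letting the exception bubble up as a stack-trace.
from wikipedia import page
from wikipedia.exceptions import DisambiguationError, PageError

def fetch_article(topic):
    try:
        return {"content": page(topic).content}
    except DisambiguationError as e:
        # Wikipedia suggests candidate articles; hand them back to the caller
        return {"error": "ambiguous topic", "options": e.options}
    except PageError:
        return {"error": "no article found for '%s'" % topic}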

Model caching

The way it currently works, every time a request comes in, I have to scrape the Wikipedia article(s) and generate the model for each topic. While I still need to do some actual timing of the code, it's around 1 second per topic. So, if you had 3 topics you wanted to generate sentences about, it would take about 3 seconds.

My original plan for the caching mechanism was to store the models themselves in DynamoDB. While it worked at the beginning, I soon found myself hitting DynamoDB's item size limit of 400 KB per item.

My next idea was to create a caching mechanism that saves a JSON representation of each model to an S3 bucket as it comes in. I'd simply use a sorted string-list of the topics as the file name.

So if I had the topics Pizza, Tupac, Flashlight, the filename would be:

flashlightpizzatupac.json

I'd then check if the file exists in S3 for the incoming request and, if so, grab it and base the model generation off of that. So far it's saved me an average of at least 75% of the time for each request. I'll likely dedicate a blog post to the lambda function and caching mechanism itself.
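As a sketch of that mechanism (the bucket name, and the use of markovify's to_json()/from_json() for serialization, are illustrative rather than the project's exact code):

# Sketch of the S3 model cache: sorted topic names form the key, and the
# serialized Markov model is stored as JSON.
import boto3
import markovify
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "pyumipsum-model-cache"  # placeholder bucket name

def cache_key(topics):
    # Pizza, Tupac, Flashlight -> "flashlightpizzatupac.json"
    return "".join(sorted(t.lower() for t in topics)) + ".json"

def get_or_build_model(topics, build_model):
    key = cache_key(topics)
    try:
        # Cache hit: rebuild the model from the stored JSON
        obj = s3.get_object(Bucket=BUCKET, Key=key)
        return markovify.Text.from_json(obj["Body"].read().decode("utf-8"))
    except ClientError:
        # Cache miss: build the model from scratch and store it for next time
        model = build_model(topics)
        s3.put_object(Bucket=BUCKET, Key=key, Body=model.to_json().encode("utf-8"))
        return model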

I'm well aware that the current way I'm caching has some holes in it. What happens if I have the topics flash and light? The filename would still be flashlight.json.

So I'm trying to think of a way to cache the models with a minimum amount of latency. I'm thinking of storing a mapping of S3 files to hashes/GUIDs in DynamoDB so it'll be easier to manage. It should only introduce a few milliseconds of latency as it queries and returns the data.
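One way to close the flash/light hole, sketched below, is to hash a delimited, sorted topic list instead of concatenating the raw strings, and have the DynamoDB table map each digest back to its topic list. This is an illustration of the idea, not the final implementation.

# Sketch: derive a collision-resistant cache key from the topic list.
import hashlib

def hashed_cache_key(topics):
    # The delimiter keeps ["flash", "light"] distinct from ["flashlight"]
    normalized = "|".join(sorted(t.lower() for t in topics))
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest() + ".json"

print(hashed_cache_key(["flash", "light"]))  # different digest...
print(hashed_cache_key(["flashlight"]))      # ...from this one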
