OK, so we know there are servers behind the scenes, along with a whole stack of software running on them to abstract the process away; none of us thinks processing happens without CPU, memory, network and storage. But do we care?
Pay-as-you-go pricing just makes sense for some workloads, and being able to focus on writing the code and integrations, caring only about the outputs, seems like a sensible evolution of IT. We’ve all been there (me again this month, hence this blog): a user, typically someone semi-technical (sorry Paul… you really are!), requests a “VM” on which they want to run “some code”. Drill down into it and it turns out to be some form of intermittent, often relatively simple processing, and in this case they were doing the right thing by not trying to run it on their laptop! Our goal was relatively simple: give a user the ability to securely upload files produced as the output of scripts, run some basic processing and cleansing on them, and make the results available to another set of users. A perfect case for serverless computing in 2018 (actually well before that - but hey, who’s counting).
Object storage (in this case Amazon S3) provides a serverless storage platform, fully encrypted and access-controlled, into which the original files are uploaded. An AWS Lambda function is then triggered automatically (no file-system polling code required) to process the data, in a language of our choice, and load the results into another, equally protected S3 bucket, with Amazon SNS and SES notifying both the sending and the receiving users. Total time to build the pipeline, including the processing code: less than an hour. Hard to argue against, especially with AWS giving you the first 1 million requests a month for free, which for this use case pretty much guarantees the cost is a big fat 0.
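To make that concrete, here is a minimal sketch of what the Lambda half of such a pipeline can look like. The bucket names, the cleansing rule and the SNS topic ARN are all assumptions for illustration, not the actual code from our pipeline:

```python
# Hedged sketch of an S3-triggered Lambda: read the uploaded file,
# cleanse it, write it to an output bucket, notify via SNS.
import os

# Hypothetical names - the real pipeline would set these via environment variables.
OUTPUT_BUCKET = os.environ.get("OUTPUT_BUCKET", "processed-output-bucket")
TOPIC_ARN = os.environ.get("TOPIC_ARN", "arn:aws:sns:eu-west-1:123456789012:file-processed")

def cleanse(text: str) -> str:
    """Illustrative 'basic cleansing': trim whitespace and drop blank lines."""
    lines = (line.strip() for line in text.splitlines())
    return "\n".join(line for line in lines if line)

def handler(event, context):
    # boto3 is imported lazily so cleanse() can be tested without AWS installed.
    import boto3
    s3 = boto3.client("s3")
    sns = boto3.client("sns")
    for record in event["Records"]:  # one record per S3 "object created" event
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        s3.put_object(Bucket=OUTPUT_BUCKET, Key=key,
                      Body=cleanse(body).encode("utf-8"))
        sns.publish(TopicArn=TOPIC_ARN, Message=f"Processed {key}")
```

The Lambda trigger itself is configured in the S3 console (or via IaC) as an event notification on the upload bucket, which is exactly the point: no polling loop, no server to patch.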
Realistically the process is easy enough: virtually all of it is driven by a point-and-click UI, and the only truly technical part is the code that processes the files (and if you know my coding skills, you’ll know even that isn’t that technical), so my semi-technical end user could have implemented this directly. I can hear the collective gasps from most people just thinking about the carnage of giving end users the direct ability to create this type of function. But you know what, once they know about it, I’ll put money on them doing it anyway, which leads on to the topic of another blog I’m writing: how we manage and alert on change in a self-service cloud environment!
Serverless computing… probably a stupid name… definitely not a stupid concept.
What are your thoughts?