Running Batch Jobs on Amazon ECS


I'm new to using AWS, and even more so to ECS. Currently, I have developed an application that can take an S3 link, download the data from the link, process the data, and output some information about the data. I've packaged the application in a Docker container and it resides on the Amazon Container Registry. I want to start a cluster, send an S3 link to each EC2 instance running Docker, have the container instances crunch the numbers, and return the results to a single node. I don't quite understand how I am supposed to change my application at this point. Do I need to make the application running in the Docker container a service? Or should I just send commands to the containers via SSH? Assuming I get that far, how do I communicate with the cluster to farm out work for potentially hundreds of S3 links? Ideally, since the application is compute intensive, I'd like to run one container per EC2 instance.

Thanks!

Your question is hard to answer since it's a lot of questions without a lot of research done.

My initial thought is to make it stateless.

You're on the right track with making the containers start up and process data from S3. You should expand that to use an SQS queue. Each SQS message contains an S3 link. The application starts up, grabs a message from SQS, processes the link it got, and deletes the message.
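A minimal sketch of that worker loop, in Python with boto3, might look like the following. The queue URL and the process_s3_link function are placeholders for your own setup, not part of the original question:

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # hypothetical

    def process_s3_link(link):
        # Download the object behind the link and crunch the numbers here.
        ...

    while True:
        # Long-poll SQS for up to one message at a time.
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=1,
            WaitTimeSeconds=20,
        )
        for msg in resp.get("Messages", []):
            process_s3_link(msg["Body"])  # the message body is the S3 link
            # Delete only after successful processing, so a crashed worker
            # lets the message become visible again for another container.
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

Because the message is only deleted after processing succeeds, any container can die mid-job and another one will pick the work back up, which is what makes the workers stateless and interchangeable.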

The next thing is to not output to the console or anything of that kind. Output the results somewhere else: a different SQS queue, or somewhere similar.
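Continuing the sketch above, and assuming a hypothetical results queue, the worker could push each result there instead of printing it:

    import json

    import boto3

    sqs = boto3.client("sqs")
    RESULTS_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/results-queue"  # hypothetical

    def publish_result(link, result):
        # Send the result to an output queue instead of the console,
        # so a downstream consumer can collect it whenever it likes.
        sqs.send_message(
            QueueUrl=RESULTS_URL,
            MessageBody=json.dumps({"link": link, "result": result}),
        )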

This removes the requirement for the boxes to talk to each other. It will speed things up, make the system infinitely scalable, and remove any strange hackery around making them communicate.

Also, why one container per instance? Two threads at 50% are usually the same as one thread at 100%. Remove that requirement and you can use ECS + Lambda + CloudWatch to scale based on the number of messages: more than 10,000, scale up; fewer than 100, scale down; that kind of thing. That means you can throw millions of messages at SQS and let ECS scale up to process them and output the results somewhere else for you to consume.
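As a rough sketch of that scaling piece, a Lambda on a CloudWatch schedule could read the queue depth and adjust the ECS service's desired count. The cluster name, service name, thresholds, and task counts below are all illustrative assumptions:

    import boto3

    sqs = boto3.client("sqs")
    ecs = boto3.client("ecs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # hypothetical

    def handler(event, context):
        # Invoked by a CloudWatch scheduled rule, e.g. once a minute.
        attrs = sqs.get_queue_attributes(
            QueueUrl=QUEUE_URL,
            AttributeNames=["ApproximateNumberOfMessages"],
        )
        backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])
        if backlog > 10000:
            desired = 50   # scale up; numbers are illustrative, tune for your workload
        elif backlog < 100:
            desired = 2    # scale down
        else:
            return         # backlog is in the comfortable range, do nothing
        ecs.update_service(
            cluster="worker-cluster",    # hypothetical cluster name
            service="worker-service",    # hypothetical service name
            desiredCount=desired,
        )

Note this only adjusts the number of ECS tasks; if the cluster runs on EC2 rather than Fargate, the underlying instances would need to scale too, e.g. via an Auto Scaling group.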

