Bachelor's or Master's student with an interest in data structures and algorithms, particularly online algorithms and functional data structures. A background in stream processing and "big" data is a plus; an interest in stream processing, however, is a must.
The candidate should read up on Scala before the internship and come prepared to deepen their understanding of it. Baseline concepts to review include pattern matching, immutability, traits, and the core collections library.
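As a rough illustration of the baseline concepts listed above, a minimal Scala sketch might look like the following. All names here (Event, Metric, Heartbeat, summarize) are hypothetical and chosen only to exercise traits, pattern matching, immutability, and the core collections in one place:

```scala
// Sealed trait: a common interface whose subtypes are known at compile time,
// which lets the compiler check pattern matches for exhaustiveness.
sealed trait Event
// Case classes/objects are immutable by default.
case class Metric(name: String, value: Double) extends Event
case object Heartbeat extends Event

object Sketch {
  // Pattern matching destructures each event variant.
  def summarize(e: Event): String = e match {
    case Metric(n, v) => s"$n=$v"
    case Heartbeat    => "heartbeat"
  }

  def main(args: Array[String]): Unit = {
    // Core immutable collections: map returns a new List, the original is untouched.
    val events: List[Event] = List(Metric("latency", 12.5), Heartbeat)
    val summaries = events.map(summarize)
    println(summaries.mkString(", "))
  }
}
```

These are exactly the kinds of constructs that show up constantly in Scala stream processing code, so being fluent in them before day one pays off.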
The internship will cover developing on existing streaming data processing applications, prototyping new ones, and supporting the surrounding stack. Technologies involved will be Kafka (Streams and Core) and Spark (on EMR). The stream processing applications must be highly available and accommodate "heavy" traffic (upwards of 60K messages per second).
The candidate must be comfortable with Linux environments/Bash and the Git version control system.