Roku pioneered streaming to the TV. We connect users to the streaming content they love, enable content publishers to build and monetize large audiences, and provide advertisers with unique capabilities to engage consumers. Roku streaming players and Roku TV™ models are available around the world through direct retail sales and licensing arrangements with TV brands and pay TV operators.
With tens of millions of players sold across many countries, thousands of streaming channels, and billions of hours watched on the platform, building a scalable, highly available, fault-tolerant big data platform is critical to our success.
Our team’s mission is to build a world-class big data platform that brings value out of data for us, our partners, and our customers. Our goal is to democratize data, provide self-service reporting and analytics tools, and fuel existing and new business-critical initiatives.
What You Will Do:
- Build highly scalable, available, fault-tolerant distributed data processing systems (batch and streaming) that handle tens of terabytes of data ingested every day and a petabyte-scale data warehouse
- Build quality data solutions and refine existing diverse datasets into simplified models that encourage self-service
- Build data pipelines that optimize for data quality and are resilient to poor-quality data sources
- Own the data mapping, business logic, transformations and data quality
- Perform low-level systems debugging, performance measurement, and optimization on large production clusters
- Participate in architecture discussions, influence product roadmap, and take ownership and responsibility over new projects
- Maintain and support existing platforms while evolving them to newer technology stacks and architectures
What you've done and what you bring:
- Strong SQL skills
- Proficiency in at least one scripting language, Python preferred
- Proficiency in at least one object-oriented language desired, Java preferred
- Experience with big data technologies such as HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, Redshift, etc.
- Experience with AWS and Looker is a plus
- Ability to collaborate with cross-functional teams such as developers, analysts, and operations to execute deliverables
- 5+ years of professional experience as a data or software engineer
- BS in Computer Science; MS in Computer Science preferred