Sr. Software Engineer - Data Infrastructure (ASE)
$130,000 per year
Job Description
Imagine what you could do here. At Apple, great ideas have a way of becoming phenomenal products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish.
Do you love solving complex challenges? Are you an inventive self-starter who takes pride in bringing ideas to life at global scale? Are you passionate about building big-data platforms that use creative algorithms to process petabytes of data at very low latency? Do you have a proven track record of designing and implementing scalable cloud solutions, and do you thrive in a collaborative team environment? If so, join the Apple Services Engineering (ASE) Data Platform team to design and build a scalable big-data platform used across Apple.
As part of Apple Services Engineering (ASE), you will play a meaningful role in designing, developing, and deploying high-performance systems that handle millions of queries every single day. This enormous scale brings challenges that demand extraordinarily creative problem-solving. By focusing on the customer's needs, you'll help us build technology that works for customers around the world.
We are looking for engineers who are passionate about crafting big-data products. This role requires a deep understanding of how to build products that are highly scalable, highly available, and fully fault-tolerant.
Description
As part of the Data Platform team, you will play a meaningful role in designing, developing, and deploying high-performance systems that handle millions of online events and queries daily. This enormous scale brings challenges that require extraordinarily creative problem-solving. We are building and supporting critical infrastructure and frameworks that provide services such as structured and unstructured storage, caching, queueing, search, and much more. We are looking for a strong, enthusiastic developer to join this group. You will have a tremendous amount of individual responsibility and influence over the direction of critical services for years to come. You are someone with ideas and a real passion for software delivered as a service that improves reuse, efficiency, and simplicity.
Minimum Qualifications
5+ years of experience designing, developing, and deploying large-scale data processing frameworks and applications on cloud infrastructure such as AWS or GCP
Strong programming expertise in Go, Java, Scala, and scripting languages, preferably on critical, large-scale distributed systems
Proficiency with Infrastructure as Code (IaC) tools (e.g., Pulumi, Crossplane)
Experience collaborating with cross-functional teams to integrate cloud infrastructure solutions into data platform products and services
Experience with containerization and orchestration (e.g., Docker, Kubernetes)
Ability to design large-scale, complex applications with excellent run-time characteristics such as low latency, fault tolerance, and high availability
Bachelor's or Master's degree in Computer Science, Computer Engineering, or equivalent
Preferred Qualifications
Experience engineering modern analytics and data technologies such as Spark, Flink, Iceberg, Trino, Jupyter, and Druid at scale
Experience contributing to open-source projects
Deep commitment to excellence and quality
Enjoys a fast-paced environment and learning and leveraging new technologies
A learning mindset focused on continuously improving yourself, the team, and the organization
Company Information
Location: Cupertino, CA
Type: Hybrid
Badges:
Changemaker
Flexible Culture