Job ID: 1427
Job Location: Engineering Square, LLC. 13809 Research Blvd, Ste 735, Austin, TX 78750.
Roles and Responsibilities:
- Must have experience with Azure, Databricks, Spark, PySpark, and Python.
- Design and build highly scalable data pipelines for near real-time and batch data ingestion, processing, and data integration.
- Coordinate the creation and documentation of ETL pipelines and testing to meet customer needs for reporting, BI solutions, and data science.
- Hands-on experience configuring Azure Event Hubs, Event Grid, Stream Analytics, and Logic/Function Apps, and working with JSON.
- Expert-level skills in Python, Databricks, and Azure Data Factory.
- Be a hands-on mentor and advocate to ensure successful adoption of new tools, processes, and best practices across the organization.
- Recognize potential issues and risks during the project implementation and suggest mitigation strategies.
- Own and communicate the process of manipulating and merging large datasets.
- Serve as the expert and key point of contact between the API teams and the project/functional leads.
- Work directly with business leadership to understand data requirements; propose and develop solutions that enable effective decision-making and drive business objectives.
- Prepare advanced project implementation plans which highlight major milestones and deliverables, leveraging standard methods and work planning tools.
- Participate in the preparation of high-quality project deliverables that are valued by the business, and present them in a manner easily understood by project stakeholders.
Required skills:
- Working knowledge of message-oriented middleware/streaming data technologies such as Kafka, NiFi, MQ, or Azure Event Hubs.
- Must have strong programming skills/experience in C#/.NET and Logic Apps.
- Must have strong programming skills/experience in Azure Functions using various protocols/triggers, as well as Git/GitHub.
- Experience with ETL tools and techniques, and knowledge of CI/CD.
- Experience and expertise in cloud NoSQL databases, ideally Azure/Azure Data Services/Cosmos DB or equivalent.
- Knowledge of and experience scaling one or more popular data streaming and processing technologies such as Kafka, Spark, or Spark Streaming.
- General knowledge of and experience with configuration, load balancing, auto-scaling, monitoring, networking, and problem-solving in a cloud environment.
- Demonstrated change management and excellent communication skills.
- Degree in Information Systems, Computer Science, Engineering, or a related field, or the equivalent combination of education, training, and experience.
How to Apply: Submit your resume to jobs@engineeringsquare.us, mentioning the ‘Job ID’.