Data Engineer
Sr. Data Engineer
Location: New York (5 days onsite). Candidates available for fully onsite work and with Tier-1 company backgrounds will be preferred.

Core skill mix: Python (40%), PySpark, Terraform, Lakehouse platforms (AWS data lake services or Databricks).

Required skills:
- Proficiency in at least one cloud data lakehouse platform (preferably AWS data lake services or Databricks, alternatively Hadoop), at least one relational data store (Postgres, Oracle, or similar), and at least one NoSQL data store (Cassandra, DynamoDB, MongoDB, or similar)
- Proficiency in at least one scheduling/orchestration tool (preferably Airflow, alternatively AWS Step Functions or similar)
- Proficiency with data structures, data serialization formats (JSON, Avro, Protobuf, or similar), big-data storage formats (Parquet, Iceberg, or similar), data processing methodologies (batch, micro-batch, and streaming), one or more data modeling techniques (Dimensional, Data Vault, Kimball, Inmon, etc.), Agile methodology (developing PI plans and roadmaps), TDD (or BDD), and CI/CD tools (Jenkins, Git)
- Strong organizational, problem-solving, and critical-thinking skills; strong documentation skills

Preferred skills:
- Experience using AWS Bedrock APIs
- Knowledge of Generative AI concepts (such as RAG, vector embeddings, model fine-tuning, agentic AI)
- Experience with IaC (preferably Terraform, alternatively AWS CloudFormation)