Data Engineer - Exempt
Ref No.: 20-00366
Location: Pittsburgh, Pennsylvania
Job Description:
  • The Data Engineer is responsible for building and supporting systems that transform and store research data and improve the processes around it.
  • This role will focus on the research data pipeline, warehouse, databases, and BI tooling.
  • Will work with business analysts, data scientists, and other data engineers to facilitate ETL/ELT processes that move, clean, and store data.
  • The engineer will also be tasked with creating data accessibility points and tooling to enable reporting insights with ease of use and maintenance in mind.
  • The data engineer is expected to provide input to end state design and schema while enforcing best practices.
  • Design, build, and maintain efficient and progressive data infrastructure for research across disparate research silos in San Francisco, Boston, and Pune, focusing on creating a transparent data environment.
  • Engage in a variety of tactical projects including but not limited to ETL, storage, visualization, reporting, web scraping, and dashboard development.
  • Support, document, and evolve (re-architect as needed) existing core data stores.
  • Utilize ETL tooling to build, template, and rapidly deploy new pipelines for gathering and cleaning data.
  • Analyze existing data stores and data marts, then clean and migrate them into a centralized data lake.
  • Work with Technology and Research leads to implement central and/or virtualized warehousing solutions.
  • Develop APIs for accessing data, for use by business users (i.e., researchers and portfolio managers).
  • Configure Tableau dashboards and reports while serving as SME for end consumers of data.
  • Identify and deploy advanced BI tooling on top of datasets, including AI/ML/DL techniques and algorithms.
  • Assist in the design and development of enterprise data standards and best practices.
  • Use modern tooling to stay current with progressive technology, expand business capabilities, and shorten time to market.
  • Work closely with business analysts, data scientists, and technologists through full project lifecycles which will provide deep insight on research needs, business processes, and research practices.
  • Gather requirements and analyze solution options.
  • Develop solutions and define and execute test plans.
  • Define and implement operational procedures.
  • Automate the research and review of data quality issues to ensure data accuracy and reliability.
  • Resolve data integrity and data validation issues.
  • Produce ad-hoc queries and reports for non-standard requests from Data Scientists and Data Consumers.
  • Become an SME on the full suite of solutions delivered by the Research Data Engineering team, with an eye to identifying, analyzing, and interpreting trends or patterns in order to surface new solution options, define process improvement opportunities, and generate value for our business partners.

Qualifications:
  • 6+ years of experience as a data engineer, software engineer, or similar
  • Strong experience building ETL pipelines and knowledge of ETL best practices
  • Experience with overall data architecture and data routing design
  • Familiarity with data quality control tools and processes
  • Strong communication skills and a keen attention to detail
  • Experience working in investment research and/or quantitative finance
  • Advanced degree or CFA
  • Development experience with R or Python in a data-science or research setting
  • Knowledge/Experience with financial data provider APIs (Bloomberg/FactSet/Datastream/MSCI)
  • Experience in EAGLE PACE Access and Oracle
  • Knowledge/Experience with the following technologies:
    • Symphony (STC)
    • Snowflake
    • .NET Core
    • Dataiku
    • Cloud and distributed computing experience
    • Big Data experience
Technical Qualifications:
  • The candidate is not expected to have expertise in all of the technical areas listed but should be highly proficient in several of them, including:
    • SQL, R, Python, MATLAB, SSIS, Pentaho/Kettle, Excel, Tableau, MongoDB, Kafka, Hive/Spark, Parquet
    • Experience with CI/CD, containers, and related frameworks: GitLab, Selenium, Docker, Kubernetes
    • Disciplines: Microservice Architecture, Design Patterns
    • Environment Tooling: Agile, JIRA, Confluence
    • Familiarity with RDBMS and/or NoSQL databases and related best practices