The Geospatial Big Data Engineer will be involved in the design of big data solutions that leverage open source and cloud-based technologies within the Location360 enterprise initiative, and will work with multiple teams across the organization (e.g., cloud analytics, data architects, business groups). The big data engineer will participate in building large-scale data processing systems, should be an expert in data warehousing solutions, and should be able to work with the latest (NoSQL) database technologies.
A big data engineer should embrace the challenge of dealing with petabytes or even exabytes of data daily. A big data engineer understands how to apply technologies to solve big data problems and to develop innovative big data solutions. The big data engineer generally works on implementing complex big data projects with a focus on collecting, parsing, managing, analyzing and visualizing large sets of data to turn information into insights using multiple platforms. The big data engineer should be able to develop prototypes and proofs of concept for the selected solutions. This role will drive the engineering and building of geospatial data assets to support Bayer Crop Science's Field Platform and R&D product pipeline by working on projects such as PPP, ASP3 and Climate's research farms data curation initiative.
Key responsibilities include:
• Design, build and support of cloud and open source systems to process geospatial data assets via an API-based platform
• Up-to-date knowledge of public-domain data sets that are relevant to Bayer Crop Science's product pipeline
• Partners with data science and commercial communities to bring needed data sets into GIS and big data analytical environments
• Integration of key environmental data into field management systems; provides leadership in advancing Bayer Crop Science's understanding of environmental influences on field performance and risk factors
• Minimum of 3 years' experience with Geographic Information Systems (GIS) such as OpenGeo Suite, Google Maps, MapBox, or CartoDB
• Minimum of 3 years' experience with Java, Scala, Python, or a similar development language
• Extensive knowledge of different programming or scripting languages such as Java, C++, PHP, Ruby, Python and/or R, plus experience working in Linux environments
• Experience working with raster and vector data sets applying GDAL and similar spatial libraries
• Experience developing REST style and OGC APIs that serve up geospatial data leveraging GeoServer or other similar open source technologies, preferably in a cloud environment
• Experience using GeoServer-enabled services via JavaScript libraries such as Google Maps and OpenLayers
• Proven experience (2 years) with distributed systems, e.g. Mesos, Kubernetes, Spark, Hadoop, Cassandra, distributed databases, grid computing
• Ability to build and maintain modern cloud architecture, e.g. AWS, Google Cloud, etc.
• Experience working with PostgreSQL/PostGIS for processing both vector and raster data formats such as Shapefile (.shp), GeoJSON, GeoHash, GeoTIFF, NetCDF, PNG, JPG and others
• Experience with code versioning and dependency management systems such as GitHub, SVN, and Maven
• Experience with stream processing, e.g. Kafka
• Demonstrated knowledge of agriculture and/or agriculture oriented businesses
• Experience implementing complex big data projects with a focus on collecting, parsing, managing, analyzing and visualizing large sets of data to turn information into insights using multiple platforms.
• Demonstrated experience adapting to new technologies
• Capable of deciding on needed hardware and software design and acting on those decisions, including developing prototypes and proofs of concept for the selected solutions
• Experience with object-oriented design, coding and testing patterns, as well as experience engineering (commercial or open source) software platforms and large-scale data infrastructures
• Experience creating cloud computing solutions and web applications leveraging public and private APIs
- Project Description: this individual will participate in the development of variable seeding rate prescriptions; environmental classification algorithms; precision agriculture driven analytics; design and build of soil and yield processing and publishing data pipelines; and implementation of real-time data streaming workflows for weather and related environmental datasets
- Open to remote workers who have experience working remotely. Candidates should indicate on their resume whether they need to work remotely or are willing to relocate.
- TOP 3 SKILLS: GeoServer, AWS, Python
- Good communication skills are a must.
- An MSc is not required but is a plus.