    
    
		
| Ref No.: | 21-02602 |
| Location: | Wayne, New Jersey |
		
		
		
		
	 
	 
	
    
    
	 
   Client : Virtusa    
  Position : Data Engineer with Kafka (1 Opening)    
  Location : Irving, TX (Initially Work from Home)    
  Job Type : Fulltime Permanent    
  Interview Type : Virtusa Interview + Client Interview    
  Salary Range : $110K/Annum - $115K/Annum    
 
  Detailed Job Description:    
 
  Primary Responsibilities:    
  Your responsibilities will include:    
 Developing sustainable, data-driven solutions with current and next-generation data technologies to drive our business and technology strategies  
 Building data APIs and data delivery services to support critical operational and analytical applications  
 Contributing to the design of robust systems with an eye on the long-term maintenance and support of the application  
 Leveraging reusable code modules to solve problems across the team and organization  
 Handling multiple functions and roles across projects and Agile teams  
 Defining, executing, and continuously improving our internal software architecture processes  
 Being a technology thought leader and strategist  
 
  Required:   
 
	-   BS/BA degree or equivalent combination of education/experience. 
  
	-   Intermediate- to senior-level experience in an application development role, with demonstrated strong execution capabilities 
  
	-    Experience with Spark programming (PySpark, Scala, or Java).   
  
	-    Experience troubleshooting JVM-related issues.   
  
	-    Experience with, and strategies for, handling mutable data in Hadoop.   
  
	-    Experience with StreamSets.   
  
	-    Familiarity with machine learning implementation using PySpark.   
  
	-   Experience with data visualization tools such as Cognos, Arcadia, or Tableau 
  
  
 
  Preferred:   
 
	-   8+ years of IT industry experience preferred 
  
	-   Angular 4 and React development expertise in an up-to-date Java development environment with cloud technologies 
  
	-   1+ years' experience with Amazon Web Services (AWS), Google Compute Engine, or another public cloud service 
  
	-   2+ years of experience with streaming using Spark, Flink, Kafka, or NoSQL 
  
	-   2+ years of experience with dimensional data models and the pipelines that feed them 
  
	-   Intermediate-level experience with at least one scripting language (Python, Perl, JavaScript) 
  
	-   Hands-on design experience with data pipelines, including joining structured and unstructured data 
  
	-   Familiarity with SAS programming is a plus 
  
	-   Experience implementing open-source frameworks and exposure to various open-source and packaged software architectures (AngularJS, ReactJS, Node, Elasticsearch, Spark, Scala, Splunk, Apigee, Jenkins, etc.). 
  
	-   Experience with various NoSQL databases (Hive, MongoDB, Couchbase, Cassandra, Neo4j) is a plus 
  
	-   Ability to work with cloud and data architects to define robust architectures for cloud pipelines and workflows 
  
	-   Experience with Ab Initio technologies, including but not limited to graph development, EME, Co-Op, BRE, and Continuous Flows 
  
  
 
  Personal Skills:   
 
	-   Experience managing work with distributed teams 
  
	-   Experience working in the Scrum methodology 
  
	-   Proven accountability and the self-drive to take on and see through big challenges 
  
	-   Confident; takes ownership and is willing to get the job done 
  
	-   Excellent verbal communication and cross-group collaboration skills 
  
  
  
 