Job Detail

Big Data Developer

Experience Level

5+ Years


Location

Sunnyvale, CA

Job Requirements



Our client is one of the world's largest financial institutions. With operations in the United States, Asia, Europe, and Latin America, it provides customers with a variety of products and services, including life insurance, annuities, retirement-related services, mutual funds, and investment management.


Primary Skills: Big Data Developer with Java Background

C2C OK, WebEx Interview



  • Building and maintaining data-intensive applications utilizing modern front-end and back-end technologies to deliver value to our businesses
  • Creating and maintaining optimal data pipeline architecture, assembling large, complex data sets that meet functional and non-functional business requirements.
  • Identifying, designing and implementing internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability.
  • Supporting the build of the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Relational, NoSQL, and Hadoop technologies.
  • Building and maintaining data services and data consumption tools that utilize the data pipeline to deliver actionable insights into key business performance metrics.
  • Creating data visualizations for analytics and assisting other team members with using our data products.
  • Working with partners including the Architecture, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Preparing and maintaining physical models and implementation-level details that affect the continuum of disciplines involved in the architecture, design, implementation and management of enterprise information.
  • Collaborating with other teams to design and develop data tools that support both operations and data application use cases.
  • Analyzing large data sets using components from the Hadoop ecosystem.
  • Evaluating big data technologies and prototyping solutions to improve our data processing architecture.
  • Experience building processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Experience with big data technologies: Hadoop, Hive, Impala, HBase, PySpark, Pig, Sqoop, HDFS, Solr, Apache stacks, and cloud technologies.
  • Minimum 5 years' experience with query languages including SQL and HiveQL, and experience working with Relational, NoSQL, and Hadoop systems.
  • Experience with object-oriented or functional scripting languages: Python, Java, etc.; C++ would be an advantage.
  • Experience with the Denodo Data Virtualization Platform (nice to have)
  • Working skills with back-end technologies: Node.js, Python, Java; front end technologies: HTML, CSS, JavaScript; data visualization tools: Tableau, Power BI
  • Experience with ETL tools such as Informatica is preferred
  • Hands-on experience implementing BI or data warehouse solutions preferred

       18-Month Contract Opportunity

