Treat our consultants and clients the way we would like others to treat us — we are honest, stay true to our word, and work in the best interest of our clients, consultants, and candidates. Many say they work this way, but few actually do. We are a company that does. Additionally, we bring joy to the world of IT staffing and IT recruiting by making the hiring experience memorable, fun, and different.
Find and provide the best talent for clients and excellent career opportunities for consultants and candidates — whom we treat as part of our team. Interested in joining our team? Check out the opportunity below and apply today!
The Technical Specialist in this contract role will be responsible for enterprise data warehouse design, development, implementation, migration, maintenance, and operations. The position works closely with the Data Governance and Analytics team and the Business Intelligence & Data Analytics team, and will be a key technical resource on enterprise data warehouse projects, building critical data marts and ingesting data into the Big Data platform for analytics and exchange with client partners.
- Perform data analysis, data profiling, data quality checks, and data ingestion across various layers using Big Data/Hadoop (Hive/Impala) queries, PySpark programs, and UNIX shell scripts.
- Participate in team activities, design discussions, stand-up meetings, and planning reviews with the team.
- Follow the organization's coding standards document; create mappings, sessions, and workflows per the mapping specification document.
- Update the production support run book and Control-M schedule document with each production release.
- Create and update design documents, providing detailed descriptions of workflows after every production release.
- Continuously monitor production data loads, fix issues, update the issue tracker document, and identify performance issues.
- Tune long-running ETL/ELT jobs by creating partitions, enabling full loads, and applying other standard approaches.
- Perform quality assurance checks and post-load reconciliation, and communicate with vendors to obtain corrected data.
- Participate in ETL/ELT code review and design re-usable frameworks.
- Create PySpark programs to ingest historical and incremental data.
- Create Sqoop scripts to ingest historical data from the EDW Oracle database into Hadoop IOP; create Hive tables and Impala view creation scripts for dimension tables.
- Write complex SQL queries and perform tuning based on explain plan results.
- Extract unstructured and semi-structured data using the Data Processor transformation in IDQ.
- Participate in meetings to continuously upgrade functional and technical expertise.
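The historical and incremental ingestion work described above typically follows a watermark pattern: a one-time full load, then subsequent runs that pick up only rows newer than the last successful load. A minimal sketch of that pattern, in plain Python against SQLite for portability (a real implementation would use PySpark DataFrames against Hive/Impala; the `orders` table and its columns are hypothetical):

```python
import sqlite3

def incremental_load(src, tgt, watermark):
    """Copy rows from src.orders newer than the watermark into tgt.orders
    and return the new high-water mark. Names are hypothetical."""
    rows = src.execute(
        "SELECT id, updated_at FROM orders WHERE updated_at > ?", (watermark,)
    ).fetchall()
    tgt.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?)", rows)
    return max((r[1] for r in rows), default=watermark)

# In-memory databases stand in for the EDW source and the target platform.
src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")
for db in (src, tgt):
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, updated_at TEXT)")

src.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, "2024-01-01"), (2, "2024-01-02"), (3, "2024-01-03")])

# Historical (full) load: the watermark starts at the epoch, so all rows move.
wm = incremental_load(src, tgt, "1970-01-01")

# Incremental load: only rows newer than the stored watermark move.
src.execute("INSERT INTO orders VALUES (4, '2024-01-04')")
wm = incremental_load(src, tgt, wm)
count = tgt.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

Persisting the returned watermark between runs (e.g. in a control table) is what makes the incremental run restartable, which matches the monitoring and restart duties listed above.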
Required Skills / Experience:
- 8+ years of experience with Informatica PowerCenter on Data Warehousing or Data Integration projects
- Proven ability to write high quality code
- 7+ years of experience with expertise implementing complex ETL logic
- 3+ years of experience developing and enforcing strong reconciliation processes
- Accountable for ETL design documentation
- 5+ years of strong SQL experience (Oracle preferred)
- 5+ years of experience with relational database, data vault, and dimensional model design
- 3+ years of UNIX/Linux shell scripting experience
- Experience applying ETL standards and practices to establish and maintain a centralized metadata repository
- Computer literacy with Excel, PowerPoint, Word, etc.
- Effective communication, presentation, & organizational skills
- Ability to establish priorities & follow through on projects, paying close attention to detail with minimal supervision
- Required Education: BS/BA degree or combination of education & experience
- Familiar with Project Management methodologies like Waterfall and Agile
- Perform other duties as assigned
- Analysis, design, development, support, and enhancement of ETL/ELT in a data warehouse environment with Cloudera Big Data technologies (Hadoop, MapReduce, Sqoop, PySpark, Spark, HDFS, Hive, Impala, StreamSets, Kudu, Oozie, Hue, Kafka, Yarn, Python, Flume, Zookeeper, Sentry, Cloudera Navigator) along with Informatica, Oracle SQL/PL-SQL, UNIX commands, and shell scripting
- 2+ years of strong development experience creating Sqoop scripts, PySpark programs, HDFS commands, HDFS file formats (Parquet, Avro, ORC, etc.), StreamSets pipelines, job schedules, Hive/Impala queries, and UNIX shell scripts
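The reconciliation requirement above usually amounts to comparing post-load row counts (or checksums) between source and target and flagging any mismatch before sign-off. A minimal, hypothetical sketch of that check in plain Python (the table names and counts are invented for illustration):

```python
def reconcile(source_counts, target_counts):
    """Compare per-table row counts from the source system and the warehouse.
    Returns {table: (source_count, target_count)} for every mismatch;
    an empty dict means the load reconciles cleanly."""
    mismatches = {}
    for table, src_n in source_counts.items():
        tgt_n = target_counts.get(table, 0)
        if src_n != tgt_n:
            mismatches[table] = (src_n, tgt_n)
    return mismatches

# Hypothetical post-load counts gathered from source and warehouse queries.
source = {"orders": 1000, "customers": 250, "products": 80}
target = {"orders": 1000, "customers": 248, "products": 80}

issues = reconcile(source, target)
```

Any entry in `issues` would then drive the vendor communication noted in the responsibilities (requesting corrected data for the affected table).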
Desired Skill Sets:
- Demonstrate effective leadership, analytical and problem-solving skills
- Excellent written and oral communication skills with technical and business teams
- Ability to work independently, as well as part of a team
- Stay abreast of current technologies in the assigned area of IT
- Establish facts and draw valid conclusions
- Recognize patterns and opportunities for improvement throughout the entire organization
- Ability to discern critical from minor problems and innovate new solutions
ABOUT REVEL IT:
Revel IT (formerly known as Fast Switch) is one of the fastest-growing, privately held, IT Staffing companies in the nation. Our client base includes 32% of the Fortune 25. We have major offices in Dublin, OH, Tucson, AZ, Los Angeles, CA, and Austin, TX, and are rapidly expanding into new markets from coast to coast.
WHY REVEL IT:
- In addition to standard health and 401k benefits, we offer referral bonuses and training/continuing education opportunities.
- 5-year client retention: 99%
- No. 1 supplier with customers: 53%
- Top 3 supplier with customers: 77%
- Consultant retention: 94%
We do our jobs in a way that brings delight every day to our clients and the people who work with us. Life is simply too short to grind through every day as a small cog in a huge recruiting machine. As a young and high energy company, we aim to help consultants and candidates land fulfilling jobs that offer real career growth.