Urgent Requirement - Same-Day Interview: Data Engineer - Juno Beach, FL
Kafi Sharma
Feb 25, 2022, 12:53:11 PM2/25/22
Hi, I hope you are doing well!
My name is Kafi and I am working with Alpha Silicon. We are headquartered in Santa Clara, CA, serving customers nationwide across the United States. We have the following job opening:
Position: Data Engineer
Location: Franklin, TN (remote during COVID)
Duration: Long-term contract
Job Description
At least 4-7+ years of industry experience in PySpark.
Technical Skill Set (Mandatory)
Big Data Tools: Hadoop, Cloudera (CDP), PySpark, Kafka, etc.
Stream-Processing Systems: Storm, Spark Streaming, etc.
Databases: Relational SQL and NoSQL databases, Snowflake, Postgres
AWS Cloud Services: Glue, EC2, S3, EMR, RDS, Lambda, DMS, etc.
Object-Oriented Languages: Python
Technical Skill Set (Optional)
Visualization Tools: SAP BusinessObjects, Tableau
Data Pipeline & Integration: Airflow, Kafka, Informatica Cloud (IICS), PowerCenter
Role and Responsibilities
a. Understand data quality requirements and design a framework for parsing rules.
b. Write ranking algorithms using PySpark.
c. Implement ETL processes using AWS Glue.
d. Ensure software is optimized through design reviews and code reviews.
e. Monitor performance, advise on any necessary infrastructure changes, and define data retention policies.
f. Build stream-processing systems using solutions such as Storm or Spark Streaming.
Project-Specific Requirements (if any, e.g., shift details)
a. Data-oriented personality, good communication skills, and an excellent eye for detail.
Relevant Experience Required
a. Proficient understanding of distributed computing principles.
b. Proficiency with Hadoop, MapReduce, and HDFS.
c. Strong understanding of data structures and algorithms.
d. Experience integrating data from multiple data sources.
e. Experience with messaging systems such as Kafka or RabbitMQ.
f. Experience with scripting languages, particularly Python.
g. Experience with Cloudera/MapR/Hortonworks Apache HDFS.
h. Experience with scripting tools and methods to optimize.