Big Data Architect
Join our Big Data Team and work on dynamic long-term projects. The majority of our team members are long-term employees who enjoy consistent work and a collaborative team approach!
Provides thought leadership to clients; assists the sales team in technical sales meetings and in creating development statements of work.
Leads the implementation of development statements of work and leads Big Data developer staff in a scrum-master-like capacity.
Designs, develops, and implements web-based Java applications to support business requirements.
Follows approved life cycle methodologies, creates design documents, and performs program coding and testing.
Resolves technical issues through debugging, research, and investigation. Relies on experience and judgment to plan and accomplish goals.
Performs a variety of tasks; a degree of creativity and latitude is required. Typically reports to a supervisor, manager, or architect.
Codes software applications to adhere to designs supporting internal business requirements or external customers.
Standardizes the quality assurance procedure for software. Oversees testing and develops fixes.
Contributes to the design and development of high-quality software for large-scale Java/Spring Batch/Hadoop distributed systems.
Loads and processes data from disparate data sets using appropriate technologies, including but not limited to those described in the skills section.
Requires a bachelor’s degree in an area of specialty and experience in the field or a related area; a master’s degree in computer science is preferred. Candidates with an equivalent level of experience who hold multiple certifications in big data technologies will also be considered.
Must have excellent communication skills, as this is a highly customer-facing position.
Familiar with standard concepts, practices, and procedures within the field (ETL, analytics, OOA/OOD, UML, design patterns, refactoring, networking, unit- and component-level testing).
Expert in Hive SQL and ANSI SQL, with strong hands-on experience in data analysis using SQL.
Ability to write SQL ranging from simple to complex, and to comprehend and support data questions and analysis using existing complex queries.
Familiarity with DevOps tooling (Puppet, Chef, Python).
Understanding of Big Data concepts and common components, including Hadoop components (Pig, Hive, Flume, Kafka, Storm, MapReduce, HDF/NiFi, Falcon, Oozie, HBase, Impala, Big SQL), Spark, cloud platforms (Google Cloud Platform, AWS, Azure), and multiple languages (Java, Scala, Python).
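As a rough illustration of the SQL-based data analysis skills described above, here is a minimal sketch using Python's built-in sqlite3 module; the table name, columns, and data are entirely hypothetical and not drawn from this posting:

```python
import sqlite3

# Hypothetical example: the "orders" table and its data are illustrative only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (customer TEXT, region TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("acme", "east", 120.0), ("acme", "east", 80.0),
     ("globex", "west", 50.0), ("globex", "east", 200.0)],
)

# A typical analysis query: aggregate per region, keep only regions
# whose total exceeds a threshold, ordered by total descending.
rows = cur.execute("""
    SELECT region, COUNT(*) AS n_orders, SUM(amount) AS total
    FROM orders
    GROUP BY region
    HAVING SUM(amount) > 100
    ORDER BY total DESC
""").fetchall()
print(rows)  # only "east" passes the HAVING filter
conn.close()
```

The same GROUP BY / HAVING pattern carries over to Hive SQL, where the queries would run against HDFS-backed tables rather than a local database.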