Technology. Flexibility. Diversity. At the center of it all are the Digital Accelerator and Advanced Analytics teams at Cummins, working together as a high-energy startup within a Fortune 500 organization. At this Midwestern technology hub, today’s sharpest, most curious minds transform what-ifs into realities.
#LifeAtCummins is about POWERING YOUR POTENTIAL. You’ll have global opportunities to develop your career and make your community a better place - to break ground professionally and be your best personally.
This is an exciting opportunity in Columbus, IN for a Senior Machine Learning Engineer.
* Solves complex analytical problems using quantitative approaches through a combination of analytical, mathematical and technical skills.
* Researches, designs, implements and validates complex algorithms to analyze diverse sources of data to achieve targeted outcomes by leveraging complex statistical and predictive modeling concepts.
* Participates in projects to support key objectives and business goals through the use of data science methodology.
* Leverages advanced analytics and data science to solve complex business problems.
* Creates multiple algorithms using complex statistical methodologies through the use of statistical programming languages and tools.
* Partners with domain experts to verify model capabilities.
* Partners with IT resources to enable appropriate data flows/data models, development using appropriate tools/technology and rapid prototyping, and informs the design of analytical products.
* Partners with less experienced employees on data science tools and methodologies.
* Owns key activities in planning and executing the productionization of analytics models: robust implementation, deployment and management of models developed by data scientists.
* Works with multiple teams of stakeholders to understand their goals, challenges and roadmaps to inform the productionization roadmap, and rapidly architects, designs, prototypes and implements architectures to tackle the Big Data and Data Science needs of the data science team.
* Researches, experiments with and utilizes leading Big Data technologies, such as Hadoop, Spark, Redshift, Microsoft Azure and AWS, for pipeline implementation. Architects, implements and tests data processing pipelines and data mining/data science algorithms in a variety of hosted settings, such as AWS, Azure and Databricks.
* Responsible for the quality and reliability of deployed pipelines and services.
* Regularly and measurably improves team processes and champions best practices.
* Works with minimal supervision under tight time constraints and responds to rapidly evolving requirements.
Abstract Reasoning - Envisions a solution before implementation by analyzing data, extracting patterns and relationships to establish a problem or solution's feasibility; develops new algorithms and analytical models using process diagrams, flow charts, and textual documentation to explain or conceptualize a complex problem.
Data Mining - Identifies relationships and patterns in data using a suite of data exploration and data visualization techniques and tools such as PowerBI, R Shiny and SAS JMP; extracts insights from multivariate data by applying principles of multivariate data mining, small-sample statistical inference tests and dimension reduction techniques to understand the underlying structure of the data and enable sound conclusions upon model building.
Data Reduction - Performs data reduction in the context of data mining using variable selection techniques to maximize signal to noise ratio in large datasets for further predictive modeling.
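As a hypothetical sketch of the variable-selection idea described above (the data, correlation-based filter and feature count are illustrative choices, not Cummins practice):

```python
# Toy variable-selection pass: rank candidate features by absolute
# correlation with the target and keep only the strongest, raising
# the signal-to-noise ratio before predictive modeling.
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 10))                 # ten candidate features
y = 2.0 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(scale=0.1, size=500)

# Absolute Pearson correlation of each feature with the target.
corrs = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
keep = np.argsort(corrs)[::-1][:2]             # indices of the two strongest
```

Real pipelines would typically use richer criteria (mutual information, regularization paths, domain knowledge), but the filter-then-model pattern is the same.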
Predictive Modeling - Develops statistical and machine learning models using appropriate variable transformations, feature selection strategies, imputation strategies, class rebalancing, resampling strategies and performance metrics to generate descriptive, explanatory or predictive models.
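A minimal sketch of such a modeling workflow, assuming scikit-learn is available (the estimator, imputation strategy and synthetic dataset are placeholders):

```python
# Predictive-modeling pipeline: imputation, scaling and a classifier,
# evaluated with a cross-validation resampling strategy.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X[rng.random(X.shape) < 0.05] = np.nan        # simulate missing values

model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # imputation strategy
    ("scale", StandardScaler()),                   # variable transformation
    ("clf", LogisticRegression()),
])
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
```

Bundling the transformations into the pipeline keeps the imputation and scaling inside each cross-validation fold, avoiding leakage from the held-out data.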
Data Smoothing and Filtering - Creates an approximating function to capture important features (low-frequency structure) while leaving out noise (high-frequency structure) in the data, using algorithms such as moving averages, robust aggregation schemes, robust regression schemes, Fourier transforms and Kalman filters for signal processing and smoothing noisy time series data.
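The moving-average case mentioned above can be sketched in a few lines (window size and test signal are illustrative):

```python
# Centered moving-average smoother: keeps the low-frequency trend,
# attenuates high-frequency noise.
import numpy as np

def moving_average(x, window=5):
    """Smooth a 1-D series with a centered moving average."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

t = np.linspace(0, 2 * np.pi, 200)
true_signal = np.sin(t)
noisy = true_signal + np.random.default_rng(1).normal(scale=0.3, size=t.size)
smooth = moving_average(noisy, window=9)
# Away from the edges, the smoothed series tracks sin(t) more
# closely than the raw noisy series does.
```

Kalman filters and robust regression handle the same trade-off with explicit models of signal and noise, at the cost of more setup.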
Statistical Foundations - Builds statistical explanatory models for regression, classification, outlier detection, anomaly detection and time series forecasting using knowledge of foundational statistics such as null hypothesis significance tests, regression models, generalized linear modeling, time series analysis, rank statistics, probability distribution fitting and survival analysis to validate hypotheses or generate predictions for any given statistical or business question.
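A null hypothesis significance test of the kind referenced above might look like this, assuming SciPy is available (the two groups are synthetic):

```python
# Two-sample t-test: does the mean of group B differ from group A?
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=0.0, scale=1.0, size=100)
group_b = rng.normal(loc=0.8, scale=1.0, size=100)  # true mean shift of 0.8

t_stat, p_value = stats.ttest_ind(group_a, group_b)
# A small p-value is evidence against the null hypothesis of equal means.
```

The same reject-or-retain logic underlies the other listed tools (GLMs, rank statistics, distribution fitting); only the test statistic and its null distribution change.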
Text Analysis - Develops text analytics models using statistical and natural language processing techniques such as word2vec, Latent Dirichlet Allocation (LDA), word frequency, sentiment analysis and key-phrase extraction to extract insights from, or build predictive models on, unstructured text datasets.
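Word frequency, the simplest of the techniques listed, needs only the standard library (the two-document corpus is a toy example):

```python
# Word-frequency extraction: tokenize, lowercase, count.
from collections import Counter
import re

docs = [
    "Engine diagnostics flagged a sensor fault.",
    "The sensor fault cleared after the engine restart.",
]
tokens = [w for d in docs for w in re.findall(r"[a-z]+", d.lower())]
top = Counter(tokens).most_common(3)  # the three most frequent tokens
```

Counts like these feed directly into bag-of-words features; word2vec and LDA replace the raw counts with learned dense or topic representations.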
Requirements Analysis - Evaluates relationships and interdependencies between requirements based upon their complexity and value to the business in order to determine feasibility and prioritization.
Programming - Creates, writes and tests computer code, test scripts and build scripts using algorithmic analysis and design, Cummins IT processes, standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements.
Tech savvy - Anticipating and adopting innovations in business-building digital and technology applications.
Balances stakeholders - Anticipating and balancing the needs of multiple stakeholders.
Collaborates - Building partnerships and working collaboratively with others to meet shared objectives.
* Minimum of 5 years of professional experience
* Must be able to produce production-grade code for both analytics models and pipelines using the following programming languages: R, Python and Scala.
* Must have experience consolidating code into version-controlled reusable R, Scala, and Python libraries/packages.
* Experience implementing Big Data platform methodologies, including but not limited to Hadoop, Spark, Redshift, Microsoft Azure and AWS.
Education, Licenses, Certifications:
* College, university, or equivalent degree in statistics, information systems or related field required.
* PhD or Master’s degree in Statistics, Econometrics, Computer Science, or equivalent experience preferred.
Compensation and Benefits
Base salary rate commensurate with experience. Additional benefits vary between locations and include options such as our 401(k) Retirement Savings Plan, Cash Balance Pension Plan, Medical/Dental/Life Insurance, Health Savings Account, Domestic Partners Coverage and a full complement of personal and professional benefits.
Cummins and E-verify
At Cummins, we are an equal opportunity and affirmative action employer dedicated to diversity in the workplace. Our policy is to provide equal employment opportunities to all qualified persons without regard to race, gender, color, disability, national origin, age, religion, union affiliation, sexual orientation, veteran status, citizenship, gender identity and/or expression, or other status protected by law. Cummins validates right to work using E-Verify. Cummins will provide the Social Security Administration (SSA) and, if necessary, the Department of Homeland Security (DHS), with information from each new employee’s Form I-9 to confirm work authorization.
Ready to think beyond your desk? Apply for this opportunity to start your career with Cummins today. careers.cummins.com
Not ready to apply but want to learn more? Join our Talent Community to get the inside track on great jobs and confidentially connect to our recruiting team https://www.cumminstalentcommunity.com/profile/join/
May 4, 2020