About the Business:
LexisNexis Risk Solutions is the essential partner in the assessment of risk. Within our Business Services vertical, we offer a multitude of solutions focused on helping businesses of all sizes drive higher revenue growth, maximize operational efficiencies, and improve customer experience. Our solutions help our customers solve difficult problems in the areas of Anti-Money Laundering/Counter Terrorist Financing, Identity Authentication & Verification, Fraud and Credit Risk mitigation, and Customer Data Management. You can learn more about LexisNexis Risk Solutions at https://risk.lexisnexis.com
About The Team:
Highly collaborative and supportive team environment where engineers help each other grow. Success comes from curiosity, learning quickly, adapting to new challenges, and delivering value through partnership.
Team members thrive when they approach problems with ownership, pragmatism, and a willingness to solve ambiguous challenges with guidance and collaboration.
Key Responsibilities
End‑to‑end data ownership: Partner with Product, Data, and Engineering teams to contribute to data initiatives from design through deployment, with support from senior engineers, while ensuring security, compliance, and reliability across data pipelines and workflows.
Data‑driven experiences: Transform complex clinical and operational datasets into intuitive, high‑quality, and discoverable data assets that support internal stakeholders and downstream product experiences.
Engineering Best Practices: Apply engineering best practices, participate in technical discussions, and learn from architectural decisions led by senior engineers, while contributing to shared standards and documentation across cross‑disciplinary teams.
AI‑leveraged Engineering: Use LLMs to accelerate tasks such as documentation, code generation, data modeling, test/synthetic data creation, and workflow automation as part of established development practices.
Required Qualifications
Experience with SQL and practical experience with at least one of Python, Scala, or Java.
Experience working with distributed data processing frameworks such as Apache Spark and platforms like Databricks.
Experience with AWS services (S3, Lambda, EMR, DynamoDB, CloudWatch, or equivalent).
Basic to working knowledge of Terraform or other infrastructure‑as‑code frameworks.
Solid understanding of database concepts including relational and NoSQL (e.g., MongoDB).
Hands-on experience or familiarity with Kafka or other event‑driven systems.
Familiarity with CI/CD tools such as GitLab CI, GitHub Actions, or similar.
Ability to complete assigned work independently while collaborating closely with senior engineers.
Strong problem‑solving skills, along with clear written and verbal communication and documentation abilities.
Experience with medical, clinical, or regulated data (HIPAA, HITRUST, etc.) is a plus.
Preferred Qualifications
Foundational experience or interest in data analytics, data modeling, or data product development.
Experience supporting data and application teams building end‑to‑end data‑driven features.
Proficiency in development languages such as Python, Scala, or Java, along with SQL and Spark or similar data processing frameworks.
Experience with Databricks, AWS, Terraform, GitLab/GitHub, MongoDB, Kafka, CI/CD, and similar technologies.
File management skills and logical problem solving.
Ability to work with data models and structured/unstructured data.
Working knowledge of industry engineering practices (e.g., code coverage, naming conventions, encapsulation).
Familiarity with Agile methodologies.
Strong understanding of data manipulation and transformation techniques.
Ability and desire to learn new tools, processes, and technologies.
Attention to detail and strong written/verbal communication skills.
We know your well-being and happiness are key to a long and successful career. We are delighted to offer country-specific benefits tailored to your location.