Architect

Date: 27 Feb 2026

Location: Hyderabad, India

Company: Wissen Infotech Private Limited

About Us

Established in 2000 in the US, we have global offices in the US, India, UK, Australia, Mexico, Vietnam, and Canada, with best-in-class infrastructure and development facilities spread across the globe. We are an end-to-end solution provider in the Banking & Financial Services, Telecom, Healthcare, and Manufacturing & Energy verticals, and we have successfully delivered over $1 billion worth of projects for more than 20 Fortune 500 companies.
We help our clients build organizational resilience for the future and steer their journey to digital fluency. Building enterprise systems, implementing digital strategies, and gaining competitive advantage through business transformation are among our core capabilities today. Wissen leverages its multi-location facilities and industry-standard processes, such as ITIL, to provide best-in-class, cost-effective solutions that promise maximum returns on minimal IT spend.
 

Position Name

Data Engineering Architect

Experience

8–13 years

Location

Bangalore

Shift Timings

Custom

Job Description

Core Technical Skills

Cloud-Native Data Engineering on AWS
Strong, hands-on expertise in AWS-native data services: S3, Glue (Schema Registry, Data Catalog), Step Functions, Lambda, Lake Formation, Athena, MSK/Kinesis, EMR (Spark), and SageMaker (incl. Feature Store).
Comfort designing and optimizing pipelines for both batch (Step Functions) and streaming (Kinesis/MSK) ingestion.
Data Mesh & Distributed Architectures
Deep understanding of data mesh principles, including domain-oriented ownership, treating data as a product, and federated governance models.
Experience enabling self-service platforms, decentralized ingestion, and transformation workflows.
Data Contracts & Schema Management
Advanced knowledge of schema enforcement, evolution, and validation (preferably with AWS Glue Schema Registry and JSON/Avro formats).
Data Transformation & Modelling
Proficiency with modern ELT/ETL stack: Spark (EMR), dbt, AWS Glue, and Python (pandas)
AI/ML Data Enablement
Designing and supporting vector stores (OpenSearch) and feature stores (SageMaker Feature Store), and integrating with MLOps/data pipelines for AI/semantic search and RAG-type workloads.
Metadata, Catalog, and Lineage
Familiarity with central cataloging, lineage solutions, and data discovery (Glue Data Catalog, Collibra, Atlan, Amundsen, etc.)
Implementing end-to-end lineage, auditability, and governance processes.
Security, Compliance, and Data Governance
Design and implementation of data security: row/column-level security (Lake Formation), KMS encryption, role-based access using AuthN/AuthZ standards (JWT/OIDC), and GDPR/SOC 2/ISO 27001-aligned policies.
Orchestration & Observability
Experience with pipeline orchestration (AWS Step Functions, Apache Airflow/MWAA) and monitoring (CloudWatch, X-Ray) in large-scale environments.
APIs & Integration
API design for both batch and real-time data delivery (REST and GraphQL endpoints for AI/reporting/BI consumption).

Competencies

Data Cleaning & Preprocessing
Data Visualization
Excel for Data Analysis
Python/R for Data Analysis
SQL Queries
Statistical Analysis

Key Skills

Cloud-Native Data Engineering on AWS
Data Mesh & Distributed Architectures
Data Contracts & Schema Management

Soft Skills

Good communication skills

Positive attitude

Qualification

Bachelor's degree in Engineering

Certifications