AWS Data Engineer / Analytics Architect – Seattle, WA (Hourly W2; Hybrid)
Applicants must be currently authorized to work in the United States on a full-time basis. The employer will not sponsor applicants for work visas. The employer may not have resources available to support STEM OPT training requirements. No C2C – Pivotal does not accept unsolicited applications or resumes from third-party recruiters or agencies.
Why clients choose Pivotal Consulting:
We are a technology management consulting firm helping Fortune 500 companies improve their performance – we specialize in making People, Process, and Technology work together! Our clients count on us to deliver excellence and seek our guidance on business and technology strategy, technology modernization, and cloud transformation initiatives. Simply put: by listening to our clients closely and focusing on delivering quality, we bring them peace of mind.
After guiding numerous clients – from global enterprises to mid-market firms to non-profit organizations – we are now experiencing breakthrough growth!
What we are looking for:
Pivotal Consulting is seeking a senior-level AWS Data Engineer / Analytics Architect to design, build, and operationalize scalable cloud‑based data platforms, pipelines, and analytics solutions for a major government/public sector data modernization initiative. The consultant will own end‑to‑end data architecture and engineering on AWS, enabling secure, reliable, and high‑quality data delivery and analytics across multiple projects and domains.
You will collaborate closely with engineers, product managers, data analysts, BI developers, data scientists, and business stakeholders to translate ambiguous business needs into production‑ready data assets, models, and visualizations.
What you will do: Architecture & Strategy
Conceptualize, design, and own scalable data architectures supporting multiple initiatives and products.
Define data models, schemas, and standards for analytics, reporting, and operational use (dimensional, relational, semantic).
Evaluate architectural tradeoffs balancing performance, scalability, security, governance, and cost.
Influence product and engineering decisions through data‑driven architectural guidance.
Data Engineering & Development (AWS Focus)
Design, develop, and maintain production‑ready ETL/ELT pipelines for structured and unstructured data using AWS Glue, Lambda, Step Functions, and S3.
Implement and manage large‑scale data lake and data warehouse environments leveraging AWS services such as S3, Redshift, Athena, and related technologies.
Develop and optimize data ingestion frameworks, transformations, and orchestration patterns, including serverless data workflows.
Develop and maintain infrastructure‑as‑code (IaC) using Terraform or AWS CloudFormation for data platform components.
Build dashboards and analytical visualizations using tools such as Tableau, Power BI, or QuickSight.
Governance, Security, Quality & Compliance
Implement data security models aligned with privacy, compliance, and governance requirements, including government and regulated environment standards.
Establish and evolve data quality, validation, and monitoring frameworks; identify and remediate data quality issues across pipelines and source systems.
Ensure adherence to governance, metadata, cataloging, and lineage standards.
Optimization, Operations & Documentation
Continuously optimize data pipelines, queries, and systems for performance, maintainability, scalability, and cost efficiency.
Provide recommendations for cost‑efficient cloud architecture and resource utilization on AWS.
Document architecture, processes, and code to ensure operational readiness and maintainability.
Support knowledge transfer and enablement for client teams.
Collaboration & Leadership
Partner with engineering, product, analytics, and business teams to identify high‑impact data opportunities.
Provide technical guidance and architectural leadership across cross‑functional teams.
Communicate progress, risks, and issues clearly with both technical and non‑technical stakeholders.
Key Deliverables & In‑Scope Activities
Data architecture designs and diagrams.
Logical and physical data models.
Production‑ready ETL/ELT pipelines and orchestration workflows.
AWS data platform implementations (data lake, data warehouse, and analytics services).
Data quality and validation frameworks.
Dashboards and analytical visualizations.
Documentation for pipelines, models, governance, and operations.
Optimization and performance improvement recommendations.
Status reports and knowledge transfer materials.
What makes you a good fit: Required Qualifications
Bachelor’s degree in Computer Science, Information Systems, or related field, or equivalent experience.
7+ years of experience in data engineering, analytics engineering, or data architecture, including cloud‑based architectures.
3+ years of hands‑on experience with AWS services for data integration, storage, and analytics (e.g., S3, Redshift, Glue, Lambda, Athena, Step Functions).
Proven experience designing and delivering scalable data architectures and production‑grade pipelines.
Advanced SQL expertise and strong data modeling skills (dimensional, relational, semantic).
Strong programming proficiency in Python or equivalent scripting language.
Familiarity with ETL orchestration tools and serverless data workflows.
Experience with structured and unstructured data sources.
Deep understanding of data governance, privacy, and security best practices, including in federal or regulated environments.
Experience building dashboards or analytical visualizations (e.g., Tableau, Power BI, QuickSight).
Ability to collaborate with cross‑functional teams and communicate complex concepts clearly.
Preferred Qualifications
AWS Certified Data Analytics – Specialty or AWS Certified Solutions Architect certification.
Experience supporting public sector, civic data, or federal/DoD/Intel data initiatives and regulated environments.
Familiarity with cost‑optimization strategies in cloud data platforms.
Experience with metadata management, cataloging, and lineage tools.
Exposure to tools such as Apache Spark, Airflow, or Snowflake.
Knowledge of DevOps/DataOps practices and CI/CD automation for data systems.
Knowledge of ML‑adjacent data patterns and feature engineering.
Why our employees love working at Pivotal:
We believe our strength comes from our differences, and as a Certified Minority-Owned Business (MBE) and a majority women-led firm, we are committed to fostering and promoting a culture of diversity and inclusion. We believe our team and our community are our greatest assets and we strive to promote both daily.
From providing our employees with the time to pursue company-sponsored certifications, to supporting and partnering with multiple non-profit organizations brought forth by our employees (such as Food Lifeline, United Way, and the Seattle Humane Society), we are proud to support both our fellow Pivotalites and the causes close to their hearts.
As we grow, we are anchored and driven by our Four Core Values:
Be Engaged – We are present, committed, and accountable to our clients and to each other.
Consistently Deliver – We are dedicated and reliable by consistently delivering excellence.
Always Better – We continuously evolve, inspired to drive beyond the everyday norm.
Do Happy – Be passionate and bring fun and creativity into everything you do.
Compensation, Diversity and Benefit Information: The pay range for this position in Washington is $50–$75/hr W2, plus benefits; however, base pay offered may vary depending on job-related knowledge, skills, candidate location, and experience.
Pivotal Consulting is committed to creating and supporting a diverse and inclusive team and serving all communities. All qualified applicants will be considered for employment regardless of race, gender, gender identity or expression, sexual orientation, religion, national origin, disability, age, or veteran status. Pivotal Consulting offers a comprehensive benefit package, including medical, dental and vision insurance, and 401k.