Martin Ayire
Product Management
Texas, United States
Skills
Cloud Computing
About
Martin T. Ayire's skills align with Programmers (Information and Communication Technology). He also has skills associated with Consultants and Specialists (Information and Communication Technology). Martin has 8 years of work experience.
Work Experience
CLOUD ENGINEER/ SOLUTION ARCHITECT
USAA
January 2017 - January 2020
- Designed and deployed scalable, highly available, fault-tolerant, and reliable resources in AWS.
- Hands-on experience with the AWS CLI, including deploying CloudFormation templates (CFTs) and managing S3, EC2, and IAM from the CLI.
- Developed Solution Definition Documents (SDD) and Low Level Design Documents for public cloud.
- Architected Amazon RDS with Multi-AZ deployment for automatic failover at the database tier.
- Worked closely with customers, internal staff, and other stakeholders to determine planning, implementation, and integration of system-oriented projects.
- Configured, installed, resized, and deployed elastic compute instances in the AWS cloud; maintained and improved the company's cloud infrastructure.
- Designed and developed aspects of the migration journey (assess, mobilize, and migrate phases), leveraging CART, ADS, Migration Evaluator, DMS, CloudEndure, etc.
- Leveraged AWS Control Tower, AWS Organizations, etc., to set up and govern a secure, multi-account AWS environment based on the company's requirements.
- AWS platform: AWS CloudFormation, AWS Lambda, AWS Systems Manager, S3, VPC, EC2, ELB, RDS, SNS, SQS, SES, Route 53, CloudFront, Service Catalog, AWS Auto Scaling, Trusted Advisor, CloudWatch, Direct Connect, Transit Gateway, DynamoDB, etc.
- Executed production change requests related to network, database, identity and access management, and other infrastructure configurations.
- Recommended and implemented security best practices in AWS, including multi-factor authentication, access key rotation, role-based permissions, a strong password policy, encryption using KMS, security groups and NACLs, S3 bucket policies and ACLs, and DDoS mitigation (a key-rotation check sketch follows this role).
- Used Jira to plan, track, support, and close requests, tickets, and incidents.
- Network: VPC, VGW, TGW, CGW, IGW, NGW, etc.
- Monitored end-to-end infrastructure using CloudWatch and SNS for notifications.
- Used AWS IAM to provision authentication and authorization into AWS accounts and to restrict or assign access for users and other AWS services.
- Built high-performing, available, resilient, and efficient 3-tier architectures for customer applications, and performed reviews of architecture and infrastructure builds, following AWS best practices.
- Provisioned and managed AWS infrastructure using Terraform.
- Optimized cost through reserved instances, changing EC2 instance types based on resource needs, S3 storage classes and lifecycle policies, Auto Scaling, etc.
- Set up VPC peering with other accounts to allow services and users of separate accounts to communicate.
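The access key rotation practice listed above can be illustrated with a short script. This is a minimal sketch, assuming the boto3 SDK and default AWS credentials; the 90-day threshold and the reporting format are hypothetical and not taken from the role itself.

import boto3
from datetime import datetime, timezone

MAX_KEY_AGE_DAYS = 90  # hypothetical rotation threshold

iam = boto3.client("iam")

def stale_access_keys():
    """Yield (user, key_id, age_days) for access keys older than the threshold."""
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            meta = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
            for key in meta:
                age = (datetime.now(timezone.utc) - key["CreateDate"]).days
                if age > MAX_KEY_AGE_DAYS:
                    yield user["UserName"], key["AccessKeyId"], age

if __name__ == "__main__":
    for user, key_id, age in stale_access_keys():
        print(f"{user}: access key {key_id} is {age} days old and due for rotation")

A report like this would typically feed a ticketing workflow (for example, Jira) rather than rotate keys automatically.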
System Administrator
AMEF
January 2016 - October 2017
- Installation, configuration, and maintenance: set up and configure servers (physical and virtual), workstations, and network devices; ensure these systems run smoothly on the appropriate operating systems (Windows Server, Linux, macOS); and manage storage solutions (SAN, NAS).
- Proactively monitor system performance to identify bottlenecks and implement solutions for optimal efficiency.
- Implemented security measures such as firewalls, intrusion detection systems, and vulnerability management practices.
- Plan and implement disaster recovery procedures to ensure business continuity in case of outages, including regularly backing up critical data.
- Design, configure, and maintain Local Area Networks (LANs), Wide Area Networks (WANs), and Virtual Private Networks (VPNs) to facilitate communication and resource sharing.
- Secure the network perimeter with firewalls and access control lists (ACLs), stay vigilant against network intrusions, and troubleshoot and optimize network performance to ensure smooth data flow.
- Configure and update essential software applications; manage software licenses.
- Recommended and implemented security best practices in AWS, including MFA, access key rotation, encryption using KMS, firewalls (security groups and NACLs), S3 bucket policies and ACLs, and DDoS mitigation.
- Set up VPC peering with other accounts to allow services and users of separate accounts to communicate.
- Responsible for installation, configuration, management, and maintenance of Linux systems, including managing users and groups.
- Used Jira to plan, track, support, and close requests, tickets, and incidents.
- Network: VPC, VGW, TGW, CGW, IGW, NGW, etc.
- Monitored end-to-end infrastructure using CloudWatch and SNS for notifications (a monitoring sketch follows this role).
- Used AWS IAM to provision authentication and authorization into AWS accounts and to restrict or assign access for users and other AWS services.
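The CloudWatch-to-SNS monitoring mentioned above could be wired up roughly as below. This is a sketch assuming boto3; the instance ID, SNS topic ARN, and the 80% CPU threshold are placeholders, not values from the role.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Placeholder identifiers for illustration only.
INSTANCE_ID = "i-0123456789abcdef0"
ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:ops-alerts"

# Raise an alarm (and notify the SNS topic) when average CPU stays above 80%
# for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName=f"high-cpu-{INSTANCE_ID}",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[ALERT_TOPIC_ARN],
)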
DevSecOps / Infrastructure Engineer
Ad-hoc
February 2020 - Present
- Troubleshoot and resolve database-related issues such as data corruption, performance degradation, and system failures; conduct regular maintenance activities such as database backups, patches, and upgrades to keep the database system stable and secure.
- Collaborate across multiple functional and technical teams to deliver Agile-based projects; integrate external libraries or APIs to enhance application functionality.
- Use version control systems to manage code repositories and track changes; collaborate using tools such as GitHub and Bitbucket to facilitate teamwork, code review, and continuous integration.
- Familiarity with the syntax, features, and idioms of Go; experience with Go's standard library and common packages, and with Go development tools, including the Go compiler (go), package manager (go mod), and build tools.
- Proficiency in writing clean, maintainable code using Go best practices; knowledge of Go profiling and debugging tools for performance optimization.
- Experience building web applications using Go frameworks such as Gin, Echo, or Revel, and building distributed systems or microservices in Go, including communication protocols (e.g., gRPC) and message queues (e.g., RabbitMQ, Kafka).
- Experience configuring and deploying Go applications to various platforms, such as cloud services (e.g., AWS, Azure); developed personal projects in Go to showcase skills and understanding of the language.
- Led projects as a DevOps Engineer, creating technology infrastructure and automation tooling and maintaining configuration management; conducted training sessions for junior team members and other groups on building processes whose dependencies are expressed in code, and answered architectural and technical DevOps infrastructure questions from the team.
- Responsible for creating software deployment strategies essential for successful software deployment in the work environment; helped a client troubleshoot issues after they migrated to the cloud.
- Work with clients to meet business needs and solve problems using the cloud.
- Used Kafka to collect, process, and store streaming event data, i.e. data that has no discrete beginning or end (a consumer sketch follows this role).
- Work with developers, architects, system administrators, and other stakeholders to architect and configure Dev/Stage/QA and Prod environments in AWS (VPC, subnets, security groups, EC2 instances, load balancers, databases, Redis, Route 53, etc.).
- Design and implement end-to-end Continuous Integration and Continuous Delivery (CI/CD) pipelines using both Jenkins and AWS pipelines.
- Used Helm, a package manager for Kubernetes, to define, install, and upgrade even complex Kubernetes applications.
- Automate the deployment and testing of resources using Infrastructure as Code (Terraform and CloudFormation) through pipelines following DevOps principles, allowing customers to rapidly build, test, and release code while minimizing errors.
- Used containerization, a lightweight alternative to virtual machines that encapsulates an application in a container with its own operating system, to make applications portable, lightweight, standardized, and easy to deploy; worked with developers to build, deploy, and orchestrate containers using Docker and Kubernetes.
- Worked with GitLab and Bamboo; have worked on the back end and have extensive knowledge of Amazon Web Services; have used Azure, TCP/IP, Go, TypeScript, and SQL.
- Provide technical guidance and mentoring to peers, less experienced engineers, and client personnel.
- Design for high availability and business continuity using self-healing architectures, fail-over routing policies, multi-AZ deployment of EC2 instances, ELB health checks, Auto Scaling, and other disaster recovery models.
- AWS platform: AWS CloudFormation, AWS Lambda, AWS Systems Manager, S3, VPC, EC2, ELB, RDS, SNS, SQS, SES, Route 53, CloudFront, Service Catalog, AWS Auto Scaling, Trusted Advisor, CloudWatch, Direct Connect, Transit Gateway, DynamoDB, etc.
- Identity & access management: AWS Organizations, AWS IAM, AWS Secrets Manager, etc.
- Leveraged AWS Control Tower, AWS Organizations, etc., to set up and govern a secure, multi-account AWS environment based on the company's requirements.
- Implemented and managed Ansible Tower to scale automation and handle complex deployments.
- Monitor and optimize the environment to ensure costs and performance scale on demand.
- Utilize Kubernetes to automate deployment, scaling, and operation of application containers across clusters of hosts.
- Designed and deployed scalable, highly available, fault-tolerant, and reliable resources in AWS; hands-on experience with the AWS CLI, including deploying CFTs and managing S3, EC2, and IAM from the CLI.
- Recommended and implemented security best practices in AWS, including MFA, access key rotation, encryption using KMS, firewalls (security groups and NACLs), S3 bucket policies and ACLs, and DDoS mitigation.
- Experience integrating multiple data sources (Oracle, SQL Server, Teradata, MongoDB, Excel, CSV); experience with Talend desktop enterprise studio products (DI, Big Data, MDM, etc.); experience with IaaS using Ansible; knowledge of and experience with enterprise scheduling tools (e.g., Control-M); experience with MongoDB, SQL Server, Teradata, Postgres, Linux, and shell scripting.
- Used Azure DevOps as a set of integrated software development tools for sharing code, tracking work, and shipping software on-premises; the team used all DevOps features or selected what was needed to enhance collaborative workflows; used Azure DevOps Server, which works with the client's existing editor or integrated development environment (IDE), enabling a cross-functional team to complete a project of any size. Roughly 50% of my workload is on Azure.
- Collaborate with cross-functional teams, including developers, analysts, and system administrators, to understand their requirements and provide database solutions; document database schemas, configurations, and processes to keep documentation clear and up to date.
- Hands-on experience with AWS Application Migration Service (MGN).
- Conduct testing to ensure that software meets requirements and functions correctly; write and execute test cases, debug issues, and fix defects to ensure software quality and reliability.
- Ensure data security and compliance with industry standards and regulations; implement access controls, encryption, and auditing mechanisms to protect sensitive data and ensure data privacy.
- Used Groovy, a powerful, optionally typed and dynamic language for the Java platform with static-typing and static-compilation capabilities, aimed at improving developer productivity; it integrates smoothly with any Java program and immediately delivers powerful features, including scripting, Domain-Specific Language authoring, runtime and compile-time metaprogramming, and functional programming.
- Managed artifacts (by-products of the software development process such as project source code, dependencies, binaries, and resources, laid out differently depending on the technology) with JFrog Artifactory, a universal DevOps solution providing end-to-end automation and management of binaries and artifacts through the application delivery process, improving productivity across the development ecosystem.
- Analyze and identify performance bottlenecks in the database system; optimize queries, fine-tune indexes, and configure database parameters to improve overall system performance and response times.
- Write and optimize complex SQL queries and develop database objects such as tables, views, stored procedures, and triggers; ensure that database code follows best practices and handles large data volumes efficiently.
- Used SonarQube, a code quality assurance tool that collects and analyzes source code and reports on the code quality of a project; it combines static and dynamic analysis tools and enables quality to be measured continually over time.
- Implemented a dynamic Ansible inventory using a Python script in AWS to configure the Ansible master server so that it identifies target servers based on their tags (a sketch follows this role).
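The Kafka streaming work mentioned in this role could look roughly like the consumer below. This is a minimal sketch using the kafka-python client; the broker address, topic name, and consumer group are assumptions for illustration only.

import json
from kafka import KafkaConsumer

# Broker, topic, and group id are placeholders.
consumer = KafkaConsumer(
    "infrastructure-events",
    bootstrap_servers=["localhost:9092"],
    group_id="event-archiver",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Continuously read the unbounded event stream; real code would write each
# event to durable storage (S3, a database) instead of printing it.
for message in consumer:
    print(f"{message.topic}[{message.partition}]@{message.offset}: {message.value}")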
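The tag-based dynamic Ansible inventory described at the end of this role might be sketched as a small Python script like the one below, assuming boto3 and Ansible's JSON inventory contract; the "Role" tag key and the use of private IPs as hostnames are assumptions.

#!/usr/bin/env python3
# Minimal dynamic inventory sketch: Ansible invokes the script with --list
# and reads JSON groups from stdout. Grouping by a "Role" tag is assumed.
import json
import boto3

def build_inventory():
    ec2 = boto3.client("ec2")
    inventory = {"_meta": {"hostvars": {}}}
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    for reservation in reservations:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            host = instance.get("PrivateIpAddress")
            if not host:
                continue
            group = tags.get("Role", "ungrouped")
            inventory.setdefault(group, {"hosts": []})["hosts"].append(host)
            inventory["_meta"]["hostvars"][host] = {"instance_id": instance["InstanceId"]}
    return inventory

if __name__ == "__main__":
    print(json.dumps(build_inventory(), indent=2))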