Aniket Kumar
Java, Spring, Microservices, Scalable System Design, AWS, Redis, Data Structures, Elasticsearch, DB2/MySQL, Design Patterns

About Me


Software Architect with over 14 years of experience, currently playing the role of an architect on the analytics platform team. Responsible for coding, designing, and architecting highly performant backend services that drive the entire analytics engine across all products in the organization. Lately, I have been involved in prompt engineering and OpenAI initiatives.

I love building things. While hard engineering problems are often intrinsically fun to tackle, I'm most drawn to solving real customer problems that have a business justification and make life easier for users and developers.

I constantly strive for new challenges and never shy away from getting my hands dirty with the latest technology offerings out there.

Highlights

  • As an Individual Contributor, developed a highly scalable self-service platform with auditing and versioning capabilities that product teams across Freshworks can use to configure their analytics data without any intervention from the dev team, thereby saving significant developer effort (approximately 20 developers).
  • Developed a generic, scalable calculator on SpEL and Java, which reduced time to market from 30 person-days to 1 week (see the SpEL sketch after this list).
  • Speaker at various technical forums (BJUG) and an active technical writer. Led the design and development of an in-house data monitoring and capture tool, thereby helping save money spent on proprietary software.
  • Onsite experience working closely with business partners in Hong Kong.
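
The calculator itself is proprietary, but the core idea is simple to sketch: metric formulas live in configuration and are evaluated with SpEL at runtime, so a new calculation does not need a code release. The snippet below is a minimal sketch only; the class name, formula, and variable names are hypothetical, not the actual implementation.

    import java.util.Map;

    import org.springframework.expression.ExpressionParser;
    import org.springframework.expression.spel.standard.SpelExpressionParser;
    import org.springframework.expression.spel.support.StandardEvaluationContext;

    public class GenericCalculator {

        private final ExpressionParser parser = new SpelExpressionParser();

        // Evaluates a configured formula against a row of metric values,
        // so new calculations can ship as configuration rather than code.
        public Number evaluate(String formula, Map<String, Object> variables) {
            StandardEvaluationContext context = new StandardEvaluationContext();
            context.setVariables(variables);
            return parser.parseExpression(formula).getValue(context, Number.class);
        }

        public static void main(String[] args) {
            GenericCalculator calculator = new GenericCalculator();
            // "#revenue / #orders" is a made-up formula; real formulas would come from configuration.
            Number avgOrderValue = calculator.evaluate("#revenue / #orders",
                    Map.of("revenue", 1200.0, "orders", 40));
            System.out.println(avgOrderValue); // 30.0
        }
    }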

Interests


My personal interests include:

  • Exploring Scalable Systems. I am deeply fascinated by the design and architecture of scalable systems. As cloud technologies evolve, I continuously seek to enhance my understanding and apply these concepts in both personal and professional projects. The challenge of ensuring performance and reliability at scale is something I find particularly rewarding.
  • Innovative Tooling. I believe in leveraging tools that simplify complex tasks. For instance, I developed a simple application that interacts with APIs from my favorite web series to notify me of new episodes. Additionally, I've experimented with Arduino projects, such as building a Bluetooth sensor to control lighting from my phone, and I currently assist a school student with his Arduino project.

Recent Projects

  • Contributing to open-source has always been a goal of mine. Recently, I raised an issue with the Spring framework and am working towards submitting a pull request.

Other Activities

  • I am an active member of the Bangalore Java User Group (BJUG), where I engage with fellow developers to share knowledge and insights.

The Learning Library

Books

  • Designing Data-Intensive Applications

    This book serves as my go-to reference guide, which I often pick up and reread. I also enjoy Martin Kleppmann's insightful videos on YouTube to complement the material. I'm putting together quick-reference notes, focusing on key topics like transactions in distributed systems, replication, and consensus algorithms.

Papers

  • DynamoDB paper

    A multi-tenant database designed to provide low-latency responses, typically in the single-digit millisecond range, regardless of data size or traffic volume. Highly scalable: it has handled trillions of API calls, with peak requests reaching 89.2 million per second. The paper also walks through practical scenarios around operational challenges.
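
    To make the latency point concrete, here is a minimal sketch of the kind of single-item point read those single-digit-millisecond figures refer to, using the AWS SDK for Java v2. The table name and key are placeholders I have assumed for illustration, not details from the paper.

        import java.util.Map;

        import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
        import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
        import software.amazon.awssdk.services.dynamodb.model.GetItemRequest;

        public class DynamoPointRead {
            public static void main(String[] args) {
                try (DynamoDbClient dynamoDb = DynamoDbClient.create()) {
                    // "events" and "pk" are placeholder table/key names for this sketch.
                    GetItemRequest request = GetItemRequest.builder()
                            .tableName("events")
                            .key(Map.of("pk", AttributeValue.builder().s("user#42").build()))
                            .build();

                    // A key-value read like this is the access pattern behind the paper's
                    // single-digit-millisecond latency claims.
                    Map<String, AttributeValue> item = dynamoDb.getItem(request).item();
                    System.out.println(item);
                }
            }
        }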

  • Google File System paper

    Large Files: GFS is optimized for large files, typically ranging from megabytes to gigabytes in size. It efficiently manages these files by breaking them into smaller chunks.
    Chunk Data: Files are divided into fixed-size chunks (usually 64 MB), stored across multiple servers. Each chunk is replicated for fault tolerance, ensuring data availability even in case of hardware failures.
    Metadata: GFS maintains metadata about files and their locations, including information on chunk sizes, replication status, and the mapping of chunks to storage nodes.
    Versioning: each chunk carries a version number, which the master uses to detect and discard stale replicas after failures.
    Fault Tolerance: The system is designed to handle failures gracefully by replicating chunks across different machines and automatically recovering from errors.
    Append-Only Operations: GFS supports append operations efficiently, making it suitable for applications that require logging or streaming data.
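
    As a toy illustration of the chunking idea only (not how GFS itself is implemented), the sketch below splits a file into 64 MB chunks; a real chunkserver would additionally track chunk handles, version numbers, and replica locations.

        import java.io.IOException;
        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.ArrayList;
        import java.util.List;

        public class ChunkSplitter {

            // The fixed chunk size described in the GFS paper.
            static final int CHUNK_SIZE = 64 * 1024 * 1024;

            // Reads a file chunk by chunk and returns the size of each chunk.
            // Only the split is modeled here; storing and replicating chunks is out of scope.
            static List<Integer> split(Path file) throws IOException {
                List<Integer> chunkSizes = new ArrayList<>();
                try (InputStream in = Files.newInputStream(file)) {
                    byte[] buffer = new byte[CHUNK_SIZE];
                    int read;
                    while ((read = in.readNBytes(buffer, 0, CHUNK_SIZE)) > 0) {
                        chunkSizes.add(read); // the last chunk may be smaller than 64 MB
                    }
                }
                return chunkSizes;
            }

            public static void main(String[] args) throws IOException {
                Path sample = Files.createTempFile("gfs-demo", ".bin");
                Files.write(sample, new byte[150 * 1024 * 1024]); // ~150 MB -> 3 chunks
                System.out.println(split(sample)); // [67108864, 67108864, 23068672]
            }
        }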

Resume & Work Insights


Want to know more about my work? You can view or download my resume and a detailed PDF about my projects below: