I am Zehadul Islam

Aspiring Machine Learning & AI Engineer | Data Analyst | Competitive Programmer | Full-Stack Web Developer
Name: Md. Zehadul Islam
Profile: CSE Undergraduate | Aspiring Machine Learning & AI Engineer | Competitive Programmer | Full-Stack Developer
Phone: +880 1760-430187

About me

I am a Computer Science and Engineering undergraduate at Green University of Bangladesh with a strong focus on Artificial Intelligence, Machine Learning, and data-driven system design, dedicated to transforming theoretical concepts into practical, real-world AI solutions.

I have hands-on experience working on machine learning-based academic projects, including data preprocessing, supervised model training, evaluation, and explainable AI (SHAP), alongside developing secure, role-based full-stack web applications with REST API integration. I've built 5+ ML-based projects focusing on predictive analytics and model explainability.

I actively practice competitive programming, having solved 400+ algorithmic problems on platforms like Codeforces, AtCoder, and CodeChef; this practice has strengthened my algorithmic thinking and problem-solving ability. I am seeking internship or entry-level opportunities to grow as an AI/ML-focused software engineer and researcher.

🎓 Education

B.Sc. Computer Science & Engineering

Green University of Bangladesh

2022 - Present

Higher Secondary Certificate (HSC)

Hajigonj Model Govt. College

2018 - 2020

Technical Skills

Technologies I Work With

💻 Programming Languages

Core: Python, C++ | Intermediate: Java, JavaScript | Familiar: PHP

Python

JavaScript

Java

C/C++

PHP

🤖 ML & AI

Advanced: Python ML Stack | Intermediate: TensorFlow, SHAP (XAI)

TensorFlow

Scikit-learn

Pandas

SHAP (XAI)

🌐 Web Development

HTML/CSS

Bootstrap

MySQL

⚙️ Developer Tools & Platforms

Daily Use: Git/GitHub, VS Code, Linux | Advanced: Docker, Jupyter, Postman

VS Code

Docker

Jupyter

Linux

Postman

🏆 Competitive Programming

Codeforces

AtCoder

CodeChef

🛠️ Tools & Technologies

Git

LaTeX

MS Office

Lucidchart

Canva

Featured Projects

A showcase of my technical projects and innovative solutions

Neonatal Drug Reaction Prediction
Machine Learning

ML system with SHAP-based explainability for predicting adverse drug effects on neonates using FAERS data.

Python Machine Learning SHAP Scikit-learn
ARFF Tree Explorer
AI & Visualization

Python data analysis tool that lets researchers parse, explore, and visualize ARFF datasets, with decision tree generation, attribute analysis, dynamic filtering, and statistical summaries for machine learning research workflows.

Python Data Visualization Matplotlib
Lifeline Charity Platform
Web Application

Healthcare crowdfunding platform connecting patients seeking medical assistance, donors, and campaign administrators, with role-based access control, real-time donation tracking, and transparent fund allocation for medical emergencies.

PHP MySQL HTML/CSS
Weather Tic-Tac-Toe
Java & Networking

Multiplayer game integrating real-time weather API with intelligent move logic and Java sockets.

Java Sockets API Integration
University Management System
Database System

Full-featured academic management system with user accounts, CRUD operations and secure data handling.

Java MySQL Database Design
Online Storytelling Hub
PHP & MySQL

Dynamic blog platform with user authentication, post management, categories, and responsive design using PHP & MySQL.

PHP MySQL HTML/CSS

Experience

My professional journey

📚

Peer Technical Assistance

Apr 2023 - Feb 2024

  • Supported peers in Python, C, and Java programming fundamentals
  • Taught syntax, control flow, and logic implementation
  • Helped debug runtime and logical errors
💼

Academic & Technical Support

Oct 2021 - Jun 2022

  • Assisted students with ICT fundamentals and documentation tools
  • Taught basic programming concepts and problem-solving
  • Supported data handling and structured problem-solving tasks

Education

My academic background

🎓

B.Sc. Computer Science & Engineering

2022 - Present

Green University of Bangladesh

Focus: AI/ML, Web Development, Software Engineering

📜

HSC (Science)

2018 - 2020

Hajigonj Model Govt. College

🏆

AI/ML/IoT Bootcamp

Bondstein Technologies & ICT Division (Bangladesh)

Comprehensive training in Artificial Intelligence, Machine Learning, and IoT technologies

Tech Stack

Tools, frameworks, and technologies I work with

Languages

Python JavaScript Java C/C++ PHP

ML & AI

TensorFlow Scikit-learn Pandas NumPy SHAP Matplotlib

Web Dev

HTML5 CSS3 Bootstrap MySQL APIs

Tools & Platforms

Git/GitHub Jupyter VS Code Linux Docker Firebase

Research

Ongoing research and development

Predicting Drug Effects on Neonates using ML & Explainable AI

Ongoing Green University of Bangladesh

Conducting comprehensive research on applying machine learning models to neonatal clinical data to predict adverse drug effects. Integrating SHAP-based explainable AI to ensure model transparency and clinical relevance.

Current Focus Areas
  • Data Preprocessing: FAERS dataset cleaning and neonatal-specific feature engineering
  • Model Training: Supervised learning with Random Forest, XGBoost and other algorithms
  • Explainability: SHAP interpretability for clinical decision support systems
  • Evaluation: Comprehensive performance metrics and clinical validation

Technical Blog

Sharing knowledge and insights on AI, ML, and Software Development

Download My Resume

Get a complete overview of my qualifications, professional experience, academic background, and achievements in AI/ML and software development.

5+ Projects
2+ Yrs Experience
20+ Technologies
100% Dedication

Get In Touch

Ready to bring your ideas to life? Let's discuss how we can work together to create something extraordinary.

Quick Response Guaranteed

I typically respond within 24 hours, often much faster!

Location

Dhaka, Bangladesh

Response Time

Within 24 Hours

Send Me a Message

Fill out the form below and I'll get back to you soon.

Understanding SHAP: Making ML Models Explainable

Machine Learning XAI December 2024

In the rapidly evolving field of machine learning, model interpretability has become crucial, especially in sensitive domains like healthcare. SHAP (SHapley Additive exPlanations) provides a unified approach to explaining the output of any machine learning model.

What is SHAP?

SHAP is based on Shapley values from cooperative game theory. It assigns each feature an importance value for a particular prediction. The beauty of SHAP lies in its ability to provide both local explanations (for individual predictions) and global insights (across the entire dataset).

Why SHAP Matters in Healthcare

  • Clinical Trust: Healthcare professionals need to understand why a model makes specific predictions before acting on them.
  • Regulatory Compliance: Many healthcare regulations require explainable AI systems.
  • Model Debugging: SHAP helps identify if models are learning spurious correlations or biased patterns.
  • Feature Engineering: Understanding feature importance guides data collection and preprocessing efforts.

Implementing SHAP in Python

Here's a simple example of using SHAP with a Random Forest model:

import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load a sample dataset and split it
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Train your model
model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)

# Create SHAP explainer (TreeExplainer is optimized for tree models)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Visualize global feature importance
shap.summary_plot(shap_values, X_test)

Key SHAP Visualizations

  • Summary Plot: Shows feature importance across all predictions
  • Force Plot: Visualizes how features contribute to a single prediction
  • Dependence Plot: Shows the relationship between a feature and its SHAP value
  • Waterfall Plot: Explains individual predictions step by step

My Research Application

In my ongoing research on predicting neonatal adverse drug effects, SHAP has been instrumental in:

  • Identifying which patient characteristics most influence adverse event predictions
  • Validating that the model focuses on clinically relevant features
  • Providing interpretable results that can be discussed with healthcare professionals
  • Detecting and mitigating potential biases in the FAERS dataset

SHAP transforms black-box models into transparent, trustworthy systems that can be safely deployed in critical healthcare applications.

Data Preprocessing Best Practices for ML

Data Science Python November 2024

Data preprocessing is often said to consume 80% of a data scientist's time, yet it's the foundation of any successful machine learning project. Poor data quality leads to poor model performance, no matter how sophisticated your algorithms are.

The Data Preprocessing Pipeline

  1. Data Collection & Understanding
  2. Data Cleaning
  3. Feature Engineering
  4. Feature Scaling
  5. Feature Selection
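A minimal sketch of how the cleaning and scaling stages can be chained so the same fitted transforms are reused on new data (using scikit-learn's Pipeline on a toy DataFrame; the column names and values are illustrative, not from a real dataset):

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy numeric data with a missing value (illustrative only)
df = pd.DataFrame({"age": [25.0, 30.0, np.nan, 40.0],
                   "income": [50_000, 60_000, 55_000, 80_000]})

# Chain imputation and scaling; fitting once and reusing the
# pipeline avoids recomputing statistics on new data
pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
])
X = pipe.fit_transform(df)
print(X.shape)  # (4, 2)
```

Calling `pipe.transform()` later applies the stored median and scaling parameters, which is exactly what you want in production.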

1. Handling Missing Values

Missing data is inevitable. Here are common strategies:

import pandas as pd
import numpy as np

# Check missing values per column
print(df.isnull().sum())

# Drop rows with missing values
df_clean = df.dropna()

# Fill with mean/median/mode (assignment avoids deprecated inplace on a column)
df['age'] = df['age'].fillna(df['age'].median())

# Forward fill for time series (fillna(method='ffill') is deprecated)
df['price'] = df['price'].ffill()

2. Handling Outliers

Outliers can significantly impact model performance:

  • IQR Method: Remove values beyond 1.5 * IQR from Q1 and Q3
  • Z-Score: Remove values with |z-score| > 3
  • Domain Knowledge: Sometimes outliers are valuable signals
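The IQR method from the first bullet is a few lines of pandas; a sketch on a made-up series with one obvious outlier:

```python
import pandas as pd

# Toy series with one extreme value (illustrative)
s = pd.Series([10, 12, 11, 13, 12, 11, 300])

# IQR method: keep values within [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
q1, q3 = s.quantile(0.25), s.quantile(0.75)
iqr = q3 - q1
mask = s.between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
filtered = s[mask]
print(filtered.tolist())  # 300 is dropped
```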

3. Feature Encoding

Converting categorical variables to numerical format:

# Label Encoding
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df['category_encoded'] = le.fit_transform(df['category'])

# One-Hot Encoding
df_encoded = pd.get_dummies(df, columns=['category'])

4. Feature Scaling

Different algorithms require different scaling approaches:

from sklearn.preprocessing import StandardScaler, MinMaxScaler

# Standardization (mean=0, std=1)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Normalization (range 0-1)
scaler = MinMaxScaler()
X_normalized = scaler.fit_transform(X)

5. Feature Engineering Tips

  • Create interaction features (e.g., price_per_sqft = price / area)
  • Extract datetime components (year, month, day, hour)
  • Aggregate statistics (mean, std, max, min per group)
  • Domain-specific features based on expert knowledge
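The first two tips are one-liners in pandas; a small sketch with made-up housing rows (column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "price": [300_000, 450_000],
    "area": [1200, 1500],
    "sold_at": pd.to_datetime(["2024-01-15", "2024-06-03"]),
})

# Interaction feature
df["price_per_sqft"] = df["price"] / df["area"]

# Extract datetime components
df["sold_year"] = df["sold_at"].dt.year
df["sold_month"] = df["sold_at"].dt.month

print(df[["price_per_sqft", "sold_year", "sold_month"]])
```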

Common Pitfalls to Avoid

  • Data Leakage: Never use test data information during preprocessing
  • Overfitting: Don't create too many features without validation
  • Ignoring Class Imbalance: Use SMOTE, undersampling, or class weights
  • Not Saving Preprocessors: Always save scalers and encoders for production
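For the last pitfall, a fitted preprocessor can be persisted with joblib (installed alongside scikit-learn); a minimal sketch that writes a `scaler.joblib` file to the working directory:

```python
import joblib
from sklearn.preprocessing import StandardScaler

# Fit on (toy) training data
scaler = StandardScaler().fit([[1.0], [2.0], [3.0]])

# Persist the fitted scaler so production code applies
# the exact same transform that training used
joblib.dump(scaler, "scaler.joblib")
restored = joblib.load("scaler.joblib")
print(restored.transform([[2.0]]))  # identical to the original scaler's output
```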

Remember: Good data preprocessing is the difference between a model that works in theory and one that works in production!

Dynamic Programming: From Basics to Advanced

Algorithms C++ October 2024

Dynamic Programming (DP) is one of the most powerful algorithmic paradigms in competitive programming and software engineering. It transforms exponential-time problems into polynomial-time solutions by avoiding redundant calculations.

What is Dynamic Programming?

DP is an optimization technique that solves complex problems by breaking them down into simpler subproblems. The key insight: if you've solved a subproblem once, store its result and reuse it instead of recomputing.

When to Use DP?

DP is applicable when a problem has two properties:

  • Optimal Substructure: Optimal solution can be constructed from optimal solutions of subproblems
  • Overlapping Subproblems: Same subproblems are solved multiple times

Two Approaches: Memoization vs Tabulation

Memoization (Top-Down):

// Fibonacci with Memoization
int memo[100];
// call memset(memo, -1, sizeof memo) once before the first fib() call
int fib(int n) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];
    return memo[n] = fib(n-1) + fib(n-2);
}

Tabulation (Bottom-Up):

// Fibonacci with Tabulation
int fib(int n) {
    if (n <= 1) return n;  // also avoids out-of-bounds writes for n = 0
    vector<int> dp(n + 1); // vector instead of a non-standard VLA
    dp[0] = 0; dp[1] = 1;
    for (int i = 2; i <= n; i++)
        dp[i] = dp[i-1] + dp[i-2];
    return dp[n];
}

Classic DP Problems

1. Longest Common Subsequence (LCS)

int lcs(const string& a, const string& b) {
    int n = a.size(), m = b.size();
    // vector instead of a non-standard VLA; row/column 0 are the base cases
    vector<vector<int>> dp(n + 1, vector<int>(m + 1, 0));

    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= m; j++) {
            if (a[i-1] == b[j-1])
                dp[i][j] = dp[i-1][j-1] + 1;
            else
                dp[i][j] = max(dp[i-1][j], dp[i][j-1]);
        }
    }
    return dp[n][m];
}

2. 0/1 Knapsack Problem

  • Given weights and values of items, maximize value with weight constraint
  • State: dp[i][w] = max value using first i items with weight limit w
  • Transition: Either include or exclude current item
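A minimal sketch of this state and transition (in Python for brevity; the recurrence is language-agnostic):

```python
def knapsack(weights, values, capacity):
    n = len(weights)
    # dp[i][w] = max value using the first i items with weight limit w
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]  # exclude item i
            if weights[i - 1] <= w:  # include item i if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]

print(knapsack([1, 3, 4], [15, 20, 30], 4))  # 35 (items of weight 1 and 3)
```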

3. Coin Change Problem

  • Find minimum coins needed to make a target amount
  • Find number of ways to make the amount
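The minimum-coins variant can be sketched in a few lines (Python for brevity; `dp[a]` is the fewest coins that sum to amount `a`):

```python
def min_coins(coins, amount):
    INF = float("inf")
    # dp[a] = fewest coins needed to make amount a
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(min_coins([1, 3, 4], 6))  # 2 (3 + 3)
```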

Advanced Techniques

  • Space Optimization: Reduce 2D DP to 1D when possible
  • Digit DP: For problems involving number ranges and constraints
  • Bitmask DP: When state can be represented using bits
  • DP on Trees: Computing values on tree structures
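As an example of space optimization, the 0/1 knapsack state dp[i][w] only ever looks at row i-1, so it collapses to a single 1D array provided the capacities are iterated downward (a Python sketch):

```python
def knapsack_1d(weights, values, capacity):
    # dp[w] = best value achievable with weight limit w (one row, not n+1)
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        # iterate capacities downward so each item is counted at most once
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]

print(knapsack_1d([1, 3, 4], [15, 20, 30], 4))  # 35
```

Iterating upward instead would let an item be reused, which solves the unbounded knapsack rather than 0/1.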

Problem-Solving Strategy

  1. Define the state clearly
  2. Identify the base cases
  3. Write the recurrence relation
  4. Determine the order of computation
  5. Optimize space if needed

Master these patterns, and you'll solve 80% of DP problems in contests and interviews!

Building Neural Networks with TensorFlow 2.x

Deep Learning TensorFlow September 2024

TensorFlow 2.x has revolutionized deep learning development with its intuitive Keras API, eager execution by default, and seamless deployment capabilities. Let's explore how to build production-ready neural networks.

Setting Up Your Environment

pip install tensorflow numpy pandas matplotlib

import tensorflow as tf
from tensorflow import keras
print(tf.__version__)  # Should be 2.x

Building Your First Neural Network

Using the Sequential API for simple models:

model = keras.Sequential([
    keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(10, activation='softmax')
])

model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

The Functional API (For Complex Architectures)

inputs = keras.Input(shape=(784,))
x = keras.layers.Dense(128, activation='relu')(inputs)
x = keras.layers.Dropout(0.2)(x)
x = keras.layers.Dense(64, activation='relu')(x)
outputs = keras.layers.Dense(10, activation='softmax')(x)

model = keras.Model(inputs=inputs, outputs=outputs)

Training Best Practices

  • Callbacks: Use EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
  • Validation Split: Always monitor validation metrics
  • Batch Size: Start with 32 or 64, adjust based on memory
  • Learning Rate: Start with 0.001, use learning rate schedules

callbacks = [
    keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True),
    keras.callbacks.ModelCheckpoint('best_model.h5', save_best_only=True),
    keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
]

history = model.fit(
    X_train, y_train,
    validation_split=0.2,
    epochs=100,
    batch_size=32,
    callbacks=callbacks
)

Common Architectures

Convolutional Neural Networks (CNNs):

model = keras.Sequential([
    keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28,28,1)),
    keras.layers.MaxPooling2D((2,2)),
    keras.layers.Conv2D(64, (3,3), activation='relu'),
    keras.layers.MaxPooling2D((2,2)),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])

Recurrent Neural Networks (RNNs):

model = keras.Sequential([
    keras.layers.LSTM(128, return_sequences=True, input_shape=(None, features)),
    keras.layers.LSTM(64),
    keras.layers.Dense(output_dim, activation='softmax')
])

Transfer Learning

Leverage pre-trained models for better performance:

base_model = keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights='imagenet'
)
base_model.trainable = False

model = keras.Sequential([
    base_model,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(num_classes, activation='softmax')
])

Model Evaluation & Debugging

  • Plot training history to detect overfitting
  • Use confusion matrices for classification
  • Visualize layer activations to understand learning
  • Monitor gradient flow to prevent vanishing/exploding gradients
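A minimal sketch of the first point, plotting a training history with matplotlib (the loss values here are hardcoded for illustration; in practice they come from `model.fit()`'s `history.history` dict):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted use
import matplotlib.pyplot as plt

# Illustrative stand-in for history.history from model.fit()
history = {"loss": [0.9, 0.5, 0.3, 0.25],
           "val_loss": [0.95, 0.6, 0.45, 0.5]}

plt.plot(history["loss"], label="train loss")
plt.plot(history["val_loss"], label="val loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.savefig("training_history.png")
# A widening gap between the two curves is the classic overfitting signal
```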

Deployment

# Save model
model.save('my_model.h5')

# Load model
loaded_model = keras.models.load_model('my_model.h5')

# Convert to TensorFlow Lite for mobile
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

TensorFlow 2.x makes deep learning accessible while maintaining the power needed for cutting-edge research and production systems!

AI in Healthcare: Predicting Drug Adverse Effects

Healthcare AI August 2024

Adverse drug reactions are a leading cause of morbidity and mortality, especially in vulnerable populations like neonates. Machine learning offers a promising approach to predict and prevent these adverse effects before they occur.

The Challenge

Neonates (infants under 28 days old) represent a unique challenge in pharmacology:

  • Immature organ systems affect drug metabolism
  • Limited clinical trial data for this population
  • Off-label drug usage is common
  • High sensitivity to dosing errors

The FAERS Dataset

The FDA Adverse Event Reporting System (FAERS) is a rich source of real-world drug safety data:

  • Scale: Millions of adverse event reports
  • Diversity: Global reporting from healthcare professionals and patients
  • Richness: Patient demographics, drug information, outcomes
  • Challenges: Inconsistent reporting, missing data, class imbalance

ML Pipeline for ADR Prediction

1. Data Preprocessing

  • Filter neonatal cases (age 0-28 days)
  • Clean and standardize drug names
  • Handle missing values strategically
  • Engineer clinical features (drug combinations, dosages)
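A hedged sketch of the first two steps with pandas (the column names and rows are illustrative assumptions, not the real FAERS schema):

```python
import pandas as pd

# Toy FAERS-style rows (illustrative only)
reports = pd.DataFrame({
    "age_days": [5, 120, 14, 400, 27],
    "drug": ["Ampicillin", "ibuprofen ", "AMPICILLIN",
             "Caffeine citrate", "caffeine citrate"],
})

# Keep neonatal cases only (0-28 days old)
neonatal = reports[reports["age_days"].between(0, 28)].copy()

# Standardize drug names so variants group together
neonatal["drug"] = neonatal["drug"].str.strip().str.lower()
print(neonatal)
```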

2. Feature Engineering

  • Patient demographics (age, weight, sex)
  • Drug characteristics (class, mechanism, metabolism)
  • Temporal features (treatment duration, time to event)
  • Interaction features (polypharmacy indicators)

3. Model Selection

We evaluate multiple algorithms:

  • Random Forest: Handles non-linear relationships, robust to outliers
  • XGBoost: Superior performance, handles class imbalance
  • Neural Networks: Can learn complex patterns
  • Ensemble Methods: Combine strengths of multiple models

4. Handling Class Imbalance

from imblearn.over_sampling import SMOTE

smote = SMOTE(random_state=42)
X_resampled, y_resampled = smote.fit_resample(X_train, y_train)

Explainable AI with SHAP

In healthcare, model transparency is non-negotiable. SHAP helps us understand:

  • Which patient characteristics increase ADR risk
  • How drug combinations influence predictions
  • Whether the model learns clinically valid patterns
  • Potential biases in the training data

Evaluation Metrics

For imbalanced healthcare data, we focus on:

  • Precision-Recall AUC: Better than ROC-AUC for imbalanced data
  • F1-Score: Balances precision and recall
  • Sensitivity: Critical for not missing adverse events
  • Specificity: Avoiding false alarms that lead to alert fatigue
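These metrics are straightforward to compute with scikit-learn; a toy sketch with made-up labels and scores (1 = adverse event):

```python
from sklearn.metrics import average_precision_score, f1_score, recall_score

# Illustrative labels and model scores, not real clinical data
y_true = [0, 0, 0, 0, 1, 1, 0, 1]
y_score = [0.1, 0.2, 0.15, 0.4, 0.8, 0.7, 0.6, 0.3]
y_pred = [1 if s >= 0.5 else 0 for s in y_score]

# average_precision_score summarizes the precision-recall curve
print("PR-AUC:", average_precision_score(y_true, y_score))
print("F1:", f1_score(y_true, y_pred))
print("Sensitivity (recall):", recall_score(y_true, y_pred))
```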

Clinical Integration

For real-world impact, the system must:

  • Integrate with Electronic Health Records (EHR)
  • Provide real-time predictions at prescription time
  • Offer actionable recommendations
  • Support clinical decision-making without replacing physician judgment

Ethical Considerations

  • Patient privacy and data security
  • Algorithmic bias and fairness
  • Transparency and informed consent
  • Human oversight and accountability

By combining machine learning with domain expertise, we can build systems that enhance patient safety and improve neonatal care outcomes.

Modern Web Development with Glassmorphism

Web Dev JavaScript July 2024

Glassmorphism is a design trend that creates translucent, frosted-glass effects. It's elegant, modern, and when done right, enhances both aesthetics and usability.

The Core CSS Properties

The magic happens with these properties:

.glass-card {
  background: rgba(255, 255, 255, 0.1);
  backdrop-filter: blur(10px);
  border: 1px solid rgba(255, 255, 255, 0.2);
  border-radius: 16px;
  box-shadow: 0 8px 32px rgba(0, 0, 0, 0.1);
}

Key Principles

  • Transparency: Use rgba() or hsla() with low alpha values (0.05-0.25)
  • Blur: backdrop-filter: blur() creates the frosted effect
  • Borders: Subtle borders enhance the glass illusion
  • Shadows: Soft shadows add depth without being heavy

Browser Support & Fallbacks

Not all browsers support backdrop-filter. Provide fallbacks:

.glass-card {
  background: rgba(255, 255, 255, 0.9); /* Fallback */
}

@supports (backdrop-filter: blur(10px)) {
  .glass-card {
    background: rgba(255, 255, 255, 0.1);
    backdrop-filter: blur(10px);
  }
}

Performance Optimization

Blur effects can be expensive. Optimize with:

  • will-change: Hint to browser for GPU acceleration
  • transform: translateZ(0): Force hardware acceleration
  • Limit blur radius: Keep between 8-16px for best performance
  • Avoid animations: Don't animate blur values directly

.optimized-glass {
  will-change: transform;
  transform: translateZ(0);
  backface-visibility: hidden;
}

Color Schemes

Light Theme:

.glass-light {
  background: rgba(255, 255, 255, 0.1);
  border: 1px solid rgba(255, 255, 255, 0.2);
  color: #333;
}

Dark Theme:

.glass-dark {
  background: rgba(0, 0, 0, 0.15);
  border: 1px solid rgba(255, 255, 255, 0.1);
  color: #fff;
}

Interactive Elements

Add hover effects for better UX:

.glass-button {
  background: rgba(99, 102, 241, 0.1);
  backdrop-filter: blur(10px);
  transition: all 0.3s ease;
}

.glass-button:hover {
  background: rgba(99, 102, 241, 0.2);
  transform: translateY(-2px);
  box-shadow: 0 12px 40px rgba(99, 102, 241, 0.3);
}

Accessibility Considerations

  • Ensure sufficient color contrast (WCAG AA minimum)
  • Test with screen readers
  • Provide alternative non-transparent views
  • Don't rely solely on transparency to convey information

Common Use Cases

  • Navigation bars: Floating, semi-transparent headers
  • Cards: Content containers with depth
  • Modals: Overlay dialogs that don't completely obscure background
  • Sidebars: Panels that blend with the interface

Design Tips

  • Use gradients for richer glass effects
  • Layer multiple glass elements for depth
  • Combine with subtle animations for delight
  • Keep it subtle - too much transparency reduces readability

Real-World Example: Portfolio Cards

.portfolio-card {
  background: linear-gradient(
    135deg,
    rgba(255, 255, 255, 0.1),
    rgba(255, 255, 255, 0.05)
  );
  backdrop-filter: blur(12px) saturate(180%);
  border: 1.5px solid rgba(255, 255, 255, 0.12);
  border-radius: 18px;
  transition: all 0.4s cubic-bezier(0.4, 0, 0.2, 1);
}

.portfolio-card:hover {
  transform: translateY(-10px);
  box-shadow: 0 20px 60px rgba(99, 102, 241, 0.25);
}

Glassmorphism, when thoughtfully implemented, creates interfaces that are both beautiful and functional. Balance aesthetics with performance and accessibility for the best user experience!