A comprehensive neural network implementation project demonstrating a manually implemented feedforward network and multi-layer perceptrons for predicting heat influx into residential buildings.
Features step-by-step backpropagation calculations, multiple network architectures, and comparative analysis of optimization techniques.
Complete implementation from theory to practice with detailed mathematical derivations.
🤗 Hugging Face Model · 📊 Live Demo · 📚 Documentation · 🐛 Issues
🌟 Pioneering the future of thermal analysis through advanced neural networks. Built for researchers, students, and practitioners.
> [!TIP]
> Explore the visual results showcasing model performance and architectural comparisons.
📊 More Visualizations
> [!IMPORTANT]
> This project demonstrates advanced neural network implementations with manual backpropagation calculations, multiple architectures (1, 3, and 5 hidden neurons), and comprehensive optimization strategies. Perfect for understanding deep learning fundamentals and thermal analysis applications.
We present a comprehensive implementation of neural networks for thermal analysis, bridging the gap between theoretical understanding and practical application. This project encompasses manual implementation of feedforward networks with detailed mathematical derivations, followed by sophisticated multi-layer perceptron models for predicting heat flux in residential buildings.
Whether you're a student learning neural networks, a researcher exploring thermal analysis, or a practitioner implementing deep learning solutions, this project provides valuable insights and practical implementations.
> [!NOTE]
> - Python 3.8+ required
> - TensorFlow 2.x for advanced models
> - Jupyter Notebook for interactive exploration
> - Dataset included (319 thermal measurements)
Experience our neural network models for heat flux prediction without any setup required.

Explore our pre-trained models and datasets on Hugging Face Hub.
> [!TIP]
> ⭐ Star us to receive all release notifications and stay updated with the latest improvements!
Experience neural networks from the ground up with complete mathematical derivations and step-by-step backpropagation calculations. Our manual implementation demonstrates every detail of the learning process.
Key capabilities include:
- 🧮 Mathematical Precision: Complete derivative calculations
- 📊 Step-by-Step Training: Example-by-example weight updates
- 🔍 Detailed Analysis: Error propagation visualization
- 📚 Educational Value: Perfect for learning fundamentals
> [!TIP]
> The manual implementation includes detailed mathematical formulations, with error E = ½ * (output - target)²:
> - Forward pass: z = sigmoid(Σ(wi * xi) + b)
> - Backward pass: ∂E/∂w = (output - target) * sigmoid'(net) * input
> - Weight update: w_new = w_old - α * ∂E/∂w
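As a quick sanity check, here is a minimal NumPy sketch of a single weight update using these formulas. The numeric values are illustrative only and are not taken from the notebooks:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def sigmoid_derivative(u):
    s = sigmoid(u)
    return s * (1.0 - s)

# Illustrative values (not from the assignment)
w, b, x, target, alpha = 0.4, 0.1, 0.5, 0.8, 0.1

net = w * x + b
output = sigmoid(net)

# dE/dw for E = 0.5 * (output - target)^2
grad_w = (output - target) * sigmoid_derivative(net) * x
w_new = w - alpha * grad_w
print(f"output={output:.4f}, grad_w={grad_w:.6f}, w_new={w_new:.6f}")
```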
A thermal analysis system that predicts heat influx in residential buildings using neural network architectures of varying capacity. Multiple configurations were tested and optimized for accuracy.
Architecture Variants:
- 1 Hidden Neuron: Minimal complexity baseline model
- 3 Hidden Neurons: Balanced performance and complexity
- 5 Hidden Neurons: Maximum capacity configuration
Optimization Strategies:
- SGD with Momentum: Various learning rates (0.1, 0.5, 0.9) and momentum values
- Adaptive Methods: Adagrad optimizer for improved convergence (see the optimizer sketch below)
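A minimal sketch of how these two optimizer families might be constructed in Keras; the SGD settings mirror the best-performing configuration reported below, while the Adagrad learning rate is left at the library default:

```python
from tensorflow.keras.optimizers import SGD, Adagrad

# SGD with momentum, matching the best-reported configuration (LR=0.1, momentum=0.9)
sgd_optimizer = SGD(learning_rate=0.1, momentum=0.9)

# Adagrad adapts per-parameter step sizes from accumulated squared gradients
adagrad_optimizer = Adagrad()  # library-default learning rate
```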
Beyond the core implementations, this project includes:
- 🔬 Comprehensive Experiments: 15 different model configurations tested
- 📊 Advanced Metrics: MSE, R², correlation analysis across train/val/test sets
- 🎯 Hyperparameter Optimization: Systematic exploration of learning rates and momentum
- 📈 Visualization Suite: Training curves, prediction comparisons, scatter plots
- 💾 Model Persistence: Best models saved in Keras format
- 📋 Detailed Documentation: Mathematical derivations and implementation notes
- 🔄 Reproducibility: Fixed seeds and comprehensive result logging
- 🏠 Real-world Application: Actual thermal data from residential buildings
✨ All experiments are fully documented with results and analysis included.
Core Technologies:
- Python: Primary programming language for all implementations
- TensorFlow/Keras: Deep learning framework for advanced models
- NumPy: Numerical computing and manual neural network implementation
- Pandas: Data manipulation and analysis
- Matplotlib/Seaborn: Data visualization and result plotting
Machine Learning Stack:
- Scikit-learn: Data preprocessing, train/test splits, evaluation metrics
- Manual Implementation: Custom gradient descent and backpropagation
- Optimization: SGD with momentum, Adagrad adaptive learning
- Evaluation: MSE, R-squared, correlation analysis
Development Environment:
- Jupyter Notebooks: Interactive development and experimentation
- Version Control: Git for tracking experiments and results
- Documentation: Markdown with mathematical notation support
> [!TIP]
> Each component was selected for educational value and production readiness, enabling both learning and practical application.

> [!TIP]
> Our architecture supports both a manual implementation for educational purposes and a TensorFlow implementation for production-grade performance.
```mermaid
graph TB
    subgraph "Input Layer"
        I1[Insulation]
        I2[East Orientation]
        I3[South Orientation]
        I4[North Orientation]
    end

    subgraph "Hidden Layer Options"
        H1[1 Neuron]
        H3[3 Neurons]
        H5[5 Neurons]
    end

    subgraph "Output Layer"
        O1[Heat Flux Prediction]
    end

    subgraph "Activation Functions"
        A1[Sigmoid Hidden]
        A2[Linear Output]
    end

    I1 --> H1
    I2 --> H1
    I3 --> H1
    I4 --> H1
    I1 --> H3
    I2 --> H3
    I3 --> H3
    I4 --> H3
    I1 --> H5
    I2 --> H5
    I3 --> H5
    I4 --> H5
    H1 --> A1
    H3 --> A1
    H5 --> A1
    A1 --> O1
    O1 --> A2
```
```mermaid
sequenceDiagram
    participant D as Raw Data
    participant P as Preprocessing
    participant S as Data Split
    participant M as Model Training
    participant E as Evaluation
    participant V as Visualization

    D->>P: Load CSV Data (319 samples)
    P->>P: MinMax Scaling (0-1)
    P->>S: Normalize Features & Target
    S->>S: 60% Train / 20% Val / 20% Test
    S->>M: Feed Training Data
    M->>M: Forward Pass (Sigmoid)
    M->>M: Backward Pass (Gradient Descent)
    M->>M: Weight Updates
    M->>E: Model Predictions
    E->>E: Calculate MSE, R²
    E->>V: Generate Plots & Analysis
    V->>V: Save Results & Models
```
Project Structure:

```text
├── Assignment2_part1.ipynb                          # Manual NN implementation
├── Assignment2_part2_1.ipynb                        # Data exploration & setup
├── Assignment2_part2_2_(i)_a_1HiddenNeurons.ipynb   # 1 neuron model
├── Assignment2_part2_2_(i)_b_3HiddenNeurons.ipynb   # 3 neuron model
├── Assignment2_part2_2_(i)_b_5HiddenNeurons.ipynb   # 5 neuron model
├── Assignment2_part2_2_(i)_c&d.ipynb                # Optimization comparison
├── Assignment2_part2_2_(ii).ipynb                   # Advanced analysis
├── Assignment2_part2_2_(iii).ipynb                  # Final evaluation
├── Heat_Influx_insulation_east_south_north.csv      # Dataset
├── best_ffnn_model.keras                            # Best SGD model
├── best_heat_flux_model.keras                       # Best overall model
├── best_heat_flux_model_adagrad.keras               # Best Adagrad model
├── ffnn_trials_results.csv                          # All experiment results
└── *.png                                            # Visualization outputs
```
> [!NOTE]
> Complete performance analysis is available across all model configurations and optimization strategies.
| Model Configuration | Test MSE | Test R² | Validation R² | Optimizer |
|---|---|---|---|---|
| 1 Hidden + SGD (LR=0.1, M=0.9) | 0.002905 | 0.9588 | 0.9178 | SGD + Momentum |
| 3 Hidden + SGD Optimized | 0.003120 | 0.9542 | 0.9201 | SGD + Momentum |
| 5 Hidden + Adagrad | 0.003354 | 0.9485 | 0.9156 | Adagrad |
📊 Detailed Performance Analytics
Training Configuration Results:
| Trial | Learning Rate | Momentum | Hidden Neurons | Test MSE | Test R² | Status |
|---|---|---|---|---|---|---|
| A | 0.1 | 0.1 | 1 | 0.004521 | 0.9360 | ✅ Good |
| B | 0.1 | 0.9 | 1 | 0.002905 | 0.9588 | 🏆 Best |
| C | 0.5 | 0.5 | 1 | 0.004553 | 0.9052 | ✅ Good |
| D | 0.9 | 0.1 | 1 | 0.005987 | 0.9152 | |
| E | 0.9 | 0.9 | 1 | 0.070771 | -0.0026 | ❌ Poor |
Key Performance Insights:
- 🎯 Best Configuration: Learning Rate 0.1 with Momentum 0.9
- 📊 R² Score: Up to 95.88% variance explained
- ⚡ Convergence: Optimal balance between learning rate and momentum
- 🔄 Stability: Early stopping prevents overfitting
Performance Optimizations:
- 🎯 Hyperparameter Tuning: Systematic exploration of 15 configurations
- 📦 Early Stopping: Prevents overfitting with patience=30
- 🖼️ Data Normalization: MinMax scaling for stable training
- 🔄 Cross-Validation: Multiple seeds for robust evaluation (see the sketch below)
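A minimal sketch of how early stopping (patience=30) and multi-seed evaluation might be wired together; `build_model` and the data arrays are assumed placeholders, not code from the notebooks:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping

def run_trial(seed, build_model, X_train, y_train, X_val, y_val):
    """Train one model under a fixed seed and return its best validation loss."""
    np.random.seed(seed)
    tf.random.set_seed(seed)
    model = build_model()
    early_stopping = EarlyStopping(monitor='val_loss', patience=30,
                                   restore_best_weights=True)
    history = model.fit(X_train, y_train, epochs=500, batch_size=10,
                        validation_data=(X_val, y_val),
                        callbacks=[early_stopping], verbose=0)
    return min(history.history['val_loss'])

# Robustness check across several seeds (seed values are illustrative):
# val_losses = [run_trial(s, build_model, X_train, y_train, X_val, y_val)
#               for s in (42, 43, 44)]
```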
> [!NOTE]
> Performance metrics demonstrate high accuracy in thermal prediction, with R² scores exceeding 95% for optimized configurations.
> [!IMPORTANT]
> Ensure you have the following installed for optimal experience:
> - Python 3.8+ (Download)
> - Jupyter Notebook/Lab (Installation Guide)
> - Git (Download)
1. Clone Repository

```bash
git clone https://github.com/ChanMeng666/heat-flux-perceptrons-neural-networks.git
cd heat-flux-perceptrons-neural-networks
```

2. Install Dependencies

```bash
# Create virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install required packages
pip install tensorflow numpy pandas matplotlib seaborn scikit-learn jupyter
```

3. Launch Jupyter Environment

```bash
jupyter notebook
# Or for JupyterLab
jupyter lab
```
🎉 Success! Open the notebooks in your browser and start exploring!
Recommended Execution Order:

1. `Assignment2_part1.ipynb` - Start with manual neural network implementation
2. `Assignment2_part2_1.ipynb` - Explore the heat flux dataset
3. `Assignment2_part2_2_(i)_a_1HiddenNeurons.ipynb` - Single neuron baseline
4. `Assignment2_part2_2_(i)_b_3HiddenNeurons.ipynb` - Three neuron architecture
5. `Assignment2_part2_2_(i)_b_5HiddenNeurons.ipynb` - Five neuron architecture
6. `Assignment2_part2_2_(i)_c&d.ipynb` - Optimization comparison
7. `Assignment2_part2_2_(ii).ipynb` - Advanced analysis
8. `Assignment2_part2_2_(iii).ipynb` - Final evaluation and conclusions
> [!TIP]
> Each notebook is self-contained but builds upon previous concepts. Run them sequentially for the best learning experience.
Our dataset contains 319 thermal measurements from residential buildings with the following features:
| Feature | Description | Range | Correlation with Heat Flux |
|---|---|---|---|
| Insulation | Thermal insulation thickness (mm) | 568.55 - 909.45 | +0.6276 (Strong Positive) |
| East | East-facing surface area (m²) | 31.08 - 37.82 | +0.1024 (Weak Positive) |
| South | South-facing surface area (m²) | 31.84 - 40.55 | +0.1121 (Weak Positive) |
| North | North-facing surface area (m²) | 15.54 - 19.05 | -0.8488 (Strong Negative) |
| Heat Flux | Target: Heat influx (W/m²) | 181.5 - 278.7 | Target Variable |
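These correlations can be reproduced with a quick pandas check; the column names follow the preprocessing code later in this README and are an assumption about the CSV layout:

```python
import pandas as pd

# Load the dataset shipped with the repository
df = pd.read_csv('Heat_Influx_insulation_east_south_north.csv')

# Pearson correlation of every column with the target (assumed column name 'HeatFlux')
print(df.corr()['HeatFlux'].sort_values())
```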
Key Insights:
- 🏠 North Orientation: Strongest predictor (negative correlation -0.8488)
- 🧱 Insulation: Second strongest predictor (positive correlation +0.6276)
- 📊 Data Quality: No missing values, well-distributed features
- ⚖️ Preprocessing: MinMax normalization (0-1 range) for stable training
```python
# Data preprocessing workflow
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

def preprocess_data(data, seed=42):
    features = ['Insulation', 'East', 'South', 'North']
    target = 'HeatFlux'

    # MinMax scaling of features and target into the [0, 1] range
    scaler = MinMaxScaler()
    normalized_data = scaler.fit_transform(data[features + [target]])

    # Train/Val/Test split (60/20/20)
    train_data, temp_data = train_test_split(normalized_data, train_size=0.6, random_state=seed)
    val_data, test_data = train_test_split(temp_data, train_size=0.5, random_state=seed)

    return train_data, val_data, test_data
```
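Typical usage, assuming the CSV columns match the feature names above:

```python
import pandas as pd

data = pd.read_csv('Heat_Influx_insulation_east_south_north.csv')
train_data, val_data, test_data = preprocess_data(data, seed=42)

# With 319 samples this yields roughly 191 / 64 / 64 rows
print(len(train_data), len(val_data), len(test_data))
```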
Objective: Implement neural network training from scratch with detailed mathematical derivations.
Implementation Details:
- Network: 1 input → 1 hidden → 1 output
- Activation: Sigmoid function throughout
- Training: Example-by-example weight updates
- Learning Rate: β = 0.1
Mathematical Formulations:
```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def sigmoid_derivative(u):
    s = sigmoid(u)
    return s * (1.0 - s)

# Forward Pass
u1 = a0 + a1 * x1                         # Hidden layer weighted sum
y1 = sigmoid(u1)                          # Hidden layer activation
v1 = b0 + b1 * y1                         # Output layer weighted sum
z1 = sigmoid(v1)                          # Output layer activation

# Backward Pass
p1 = (z1 - t1) * sigmoid_derivative(v1)   # Output layer gradient
q1 = p1 * b1 * sigmoid_derivative(u1)     # Hidden layer gradient

# Weight Updates (beta is the learning rate)
delta_b0 = -beta * p1
delta_b1 = -beta * p1 * y1
delta_a0 = -beta * q1
delta_a1 = -beta * q1 * x1
```
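Putting these formulas together, a minimal runnable sketch of repeated example-by-example updates, reusing `sigmoid` and `sigmoid_derivative` from above; the initial weights and the (x1, t1) training pair are illustrative, not the assignment's actual values:

```python
# Illustrative initial weights and one training pair (not from the assignment)
a0, a1, b0, b1 = 0.1, 0.3, 0.2, 0.4
x1, t1, beta = 0.5, 0.8, 0.1

for step in range(3):
    # Forward pass
    u1 = a0 + a1 * x1
    y1 = sigmoid(u1)
    v1 = b0 + b1 * y1
    z1 = sigmoid(v1)

    # Backward pass and weight updates
    p1 = (z1 - t1) * sigmoid_derivative(v1)
    q1 = p1 * b1 * sigmoid_derivative(u1)
    b0 -= beta * p1
    b1 -= beta * p1 * y1
    a0 -= beta * q1
    a1 -= beta * q1 * x1

    print(f"step {step}: error = {0.5 * (z1 - t1) ** 2:.6f}")
```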
Experiment Matrix: 15 different configurations tested
| Architecture | Optimizer | Learning Rates | Momentum Values | Total Configs |
|---|---|---|---|---|
| 1 Hidden Neuron | SGD | [0.1, 0.5, 0.9] | [0.1, 0.9] | 5 |
| 3 Hidden Neurons | SGD | [0.1, 0.5, 0.9] | [0.1, 0.9] | 5 |
| 5 Hidden Neurons | SGD | [0.1, 0.5, 0.9] | [0.1, 0.9] | 5 |
| Best Architecture | Adagrad | Adaptive | N/A | Additional |
Key Findings:
- 🏆 Best Performance: 1 Hidden Neuron with LR=0.1, Momentum=0.9
- 📊 Accuracy: 95.88% R² score on test set
- ⚡ Convergence: Optimal balance prevents overfitting
- 🔄 Robustness: Consistent performance across multiple seeds
Basic Model Training:
```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.callbacks import EarlyStopping

# Define model architecture (hidden_neurons is 1, 3, or 5 depending on the variant)
model = Sequential([
    Dense(hidden_neurons, activation='sigmoid', input_shape=(4,)),
    Dense(1, activation='linear')
])

# Configure optimizer
optimizer = SGD(learning_rate=0.1, momentum=0.9)
model.compile(loss='mean_squared_error', optimizer=optimizer)

# Early stopping with patience=30, as used across the experiments
early_stopping = EarlyStopping(monitor='val_loss', patience=30, restore_best_weights=True)

# Train model
history = model.fit(
    X_train, y_train,
    epochs=500,
    batch_size=10,
    validation_data=(X_val, y_val),
    callbacks=[early_stopping]
)
```
Comprehensive Evaluation Pipeline:
```python
from sklearn.metrics import mean_squared_error, r2_score

# Evaluate across all data splits
def evaluate_comprehensive(model, X_train, y_train, X_val, y_val, X_test, y_test):
    predictions = {
        'train': model.predict(X_train).flatten(),
        'val': model.predict(X_val).flatten(),
        'test': model.predict(X_test).flatten()
    }

    metrics = {}
    for split, y_true in [('train', y_train), ('val', y_val), ('test', y_test)]:
        y_pred = predictions[split]
        metrics[f'MSE_{split}'] = mean_squared_error(y_true, y_pred)
        metrics[f'R2_{split}'] = r2_score(y_true, y_pred)

    return metrics
```
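Typical usage after training:

```python
metrics = evaluate_comprehensive(model, X_train, y_train, X_val, y_val, X_test, y_test)
for name, value in sorted(metrics.items()):
    print(f"{name}: {value:.4f}")
```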
Advanced Hyperparameter Optimization:
```python
# Grid search configuration
param_grid = {
    'learning_rate': [0.1, 0.5, 0.9],
    'momentum': [0.1, 0.9],
    'hidden_neurons': [1, 3, 5],
    'batch_size': [10, 20, 32]
}

# Systematic evaluation
best_config = optimize_hyperparameters(param_grid, X_train, y_train, X_val, y_val)
```
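The snippet above assumes an `optimize_hyperparameters` helper. A minimal sketch of what such a grid search might look like, selecting by validation MSE with the same Keras model pattern as earlier (an assumption, not the notebooks' exact implementation):

```python
import itertools
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

def optimize_hyperparameters(param_grid, X_train, y_train, X_val, y_val):
    """Exhaustive grid search; returns the config with the lowest validation MSE."""
    best_config, best_val_mse = None, float('inf')
    keys = list(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        config = dict(zip(keys, values))
        model = Sequential([
            Dense(config['hidden_neurons'], activation='sigmoid', input_shape=(4,)),
            Dense(1, activation='linear')
        ])
        model.compile(loss='mean_squared_error',
                      optimizer=SGD(learning_rate=config['learning_rate'],
                                    momentum=config['momentum']))
        model.fit(X_train, y_train, epochs=100, batch_size=config['batch_size'],
                  validation_data=(X_val, y_val), verbose=0)
        val_mse = model.evaluate(X_val, y_val, verbose=0)
        if val_mse < best_val_mse:
            best_config, best_val_mse = config, val_mse
    return best_config
```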
We welcome contributions to enhance this neural network implementation! Here's how you can help:
1. Fork & Clone:

```bash
git clone https://github.com/YourUsername/heat-flux-perceptrons-neural-networks.git
cd heat-flux-perceptrons-neural-networks
```

2. Create Feature Branch:

```bash
git checkout -b feature/your-enhancement
```
3. Make Improvements:
- 📊 Add new visualization techniques
- 🧠 Implement additional neural network architectures
- 🔧 Improve optimization algorithms
- 📚 Enhance documentation and tutorials
- 🧪 Add more comprehensive testing
4. Submit Pull Request:
- Provide clear description of enhancements
- Include performance comparisons
- Add appropriate documentation
- Ensure all notebooks run successfully
Research Enhancements:
- 🔬 New optimization algorithms (Adam, RMSprop)
- 🏗️ Advanced architectures (CNN, RNN for time series)
- 📈 Additional evaluation metrics
- 🎯 Hyperparameter optimization techniques
Educational Improvements:
- 📚 More detailed mathematical explanations
- 🎓 Interactive tutorials and exercises
- 📊 Additional visualization techniques
- 🔍 Step-by-step debugging guides
Support the development of advanced neural network educational resources and help us create more comprehensive deep learning tutorials!

Sponsorship Benefits:
- 🎯 Priority Support: Get faster responses to questions
- 🚀 Early Access: Preview new implementations before release
- 📊 Custom Tutorials: Request specific neural network topics
- 🏷️ Recognition: Your name/logo in project documentation
- 💬 Direct Communication: Access to development discussions
This project is licensed under the MIT License - see the LICENSE file for details.
Open Source Benefits:
- ✅ Commercial and educational use allowed
- ✅ Modification and redistribution permitted
- ✅ Private use encouraged
- ✅ No warranty or liability requirements
Chan Meng - Creator & Lead Developer, Neural Networks Researcher

- LinkedIn: chanmeng666
- GitHub: ChanMeng666
- Email: [email protected]
- Website: chanmeng.live
Research Interests:
- 🧠 Deep Learning: Neural network architectures and optimization
- 🏠 Thermal Analysis: Building energy efficiency and heat transfer
- 📚 Educational Technology: Making complex concepts accessible
- 🔬 Applied Research: Bridging theory and practical applications
Empowering students, researchers, and practitioners worldwide
⭐ Star us on GitHub • 📖 Explore the Notebooks • 🐛 Report Issues • 💡 Request Features • 🤝 Contribute
Made with ❤️ by the Heat Flux Neural Networks team