<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Atul.A.Das</title>
    <description>The latest articles on Forem by Atul.A.Das (@theprofessionalnoob).</description>
    <link>https://forem.com/theprofessionalnoob</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1310384%2F697f90c4-8edc-436c-bb10-311838432dba.png</url>
      <title>Forem: Atul.A.Das</title>
      <link>https://forem.com/theprofessionalnoob</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/theprofessionalnoob"/>
    <language>en</language>
    <item>
      <title>Doubt in Resolving Overfitting and Underfitting</title>
      <dc:creator>Atul.A.Das</dc:creator>
      <pubDate>Wed, 28 Feb 2024 10:12:46 +0000</pubDate>
      <link>https://forem.com/theprofessionalnoob/doubt-in-resolving-overfitting-and-underfitting-5b3l</link>
      <guid>https://forem.com/theprofessionalnoob/doubt-in-resolving-overfitting-and-underfitting-5b3l</guid>
      <description>&lt;p&gt;Hey guys so recently, I have been developing an artificial neural network for a binary classification problem. This problem classifies as to whether a particular employee will get promoted or not. This dataset has 54808 rows. I have been using the 80:20 train-test ratio and I have created a model with a drop function to do the same. However, I am getting some really weird results.&lt;br&gt;
The code goes like follows&lt;br&gt;
import torch&lt;br&gt;
import torch.nn as nn&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class ANN(nn.Module):
    def __init__(self, input_features=5, h1=60, h2=60, h3=45, h4=45,
                 output_features=1, dropout_prob=0.4):
        super().__init__()
        self.fc1 = nn.Linear(input_features, h1)
        self.relu1 = nn.LeakyReLU()
        self.dropout1 = nn.Dropout(p=dropout_prob, inplace=False)

        self.fc2 = nn.Linear(h1, h2)
        self.relu2 = nn.LeakyReLU()
        self.dropout2 = nn.Dropout(p=dropout_prob, inplace=False)

        self.fc3 = nn.Linear(h2, h3)
        self.relu3 = nn.LeakyReLU()
        self.dropout3 = nn.Dropout(p=dropout_prob, inplace=False)

        self.fc4 = nn.Linear(h3, h4)
        self.relu4 = nn.LeakyReLU()
        self.dropout4 = nn.Dropout(p=dropout_prob, inplace=False)

        self.output = nn.Linear(h4, output_features)
        self.output_activation_function = nn.Sigmoid()

    def forward(self, x):
        x = self.fc1(x)
        x = self.relu1(x)
        x = self.dropout1(x)

        x = self.fc2(x)
        x = self.relu2(x)
        x = self.dropout2(x)

        x = self.fc3(x)
        x = self.relu3(x)
        x = self.dropout3(x)

        x = self.fc4(x)
        x = self.relu4(x)
        x = self.dropout4(x)

        x = self.output(x)
        x = self.output_activation_function(x)
        return x

    def flatten_parameters(self):
        flattened_parameters = []
        for param in self.parameters():
            flattened_parameters.append(param.flatten())
        return torch.cat(flattened_parameters)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model = ANN()
flattened_params = model.flatten_parameters()
print(f"Flattened Parameters: {flattened_params}")
print(f"Shape: {flattened_params.shape}")

from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(
    X_resampled[['awards_won', 'avg_training_score', 'previous_year_rating',
                 'education', 'region']],
    Y_resampled, test_size=0.2, random_state=42)

from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
print("Successfully Scaled")

from torch import tensor as tn
X_train_scaled_tensor = tn(X_train_scaled, dtype=torch.float32)
X_test_scaled_tensor = tn(X_test_scaled, dtype=torch.float32)
Y_train_tensor = tn(Y_train, dtype=torch.int64)
Y_test_tensor = tn(Y_test, dtype=torch.int64)
Y_train_tensor = Y_train_tensor.unsqueeze(1)
Y_test_tensor = Y_test_tensor.unsqueeze(1)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
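&lt;p&gt;For context, the unsqueeze(1) calls are there because train_test_split gives 1-D targets of shape (N,) while the model outputs shape (N, 1); a tiny self-contained sketch (illustrative values, not my actual data) of what that reshaping does:&lt;/p&gt;

```python
import torch

# train_test_split returns 1-D targets of shape (N,), while the model's
# output is (N, 1); unsqueeze(1) adds the trailing dimension so the
# loss function sees matching shapes.
y = torch.tensor([0, 1, 1, 0], dtype=torch.int64)
y_col = y.unsqueeze(1)
print(tuple(y.shape), tuple(y_col.shape))  # (4,) (4, 1)
```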

&lt;p&gt;print("Tensors created")&lt;br&gt;
import torch.optim as optim&lt;br&gt;
criterion = nn.BCEWithLogitsLoss()&lt;br&gt;
optimizer = optim.Adam(model.parameters(), lr=0.00001, betas=(0.95, 0.999), eps=1e-7, weight_decay=0.0001, amsgrad=False)&lt;br&gt;
num_epochs = 100&lt;br&gt;
batch_size = 32&lt;br&gt;
print(Y_train_tensor.dtype)&lt;br&gt;
for epoch in range(num_epochs):&lt;br&gt;
    model.train()#Set to training mode&lt;br&gt;
    for i in range(0, len(X_train_scaled_tensor), batch_size): &lt;br&gt;
        outputs = model(X_train_scaled_tensor[i:i + batch_size])&lt;br&gt;
        loss = criterion(outputs, Y_train_tensor[i:i + batch_size].float())&lt;br&gt;&lt;br&gt;
        optimizer.zero_grad()&lt;br&gt;
        loss.backward()&lt;br&gt;
        optimizer.step()&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model.eval()#Set to evaluation mode
with torch.no_grad():
    outputs = model(X_test_scaled_tensor)
    predictions = torch.round(outputs)  
    accuracy = (predictions == Y_test_tensor).sum().item() / len(Y_test_tensor)
    print(f"Epoch {epoch + 1}, Loss: {loss.item():.4f}, Test Accuracy: {accuracy:.4f}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
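&lt;p&gt;One thing I am not sure about: my forward() already ends in a Sigmoid, but the criterion is BCEWithLogitsLoss, which applies a sigmoid internally on raw logits. A small self-contained sketch (illustrative values only) of how those two interact:&lt;/p&gt;

```python
import torch
import torch.nn as nn

logits = torch.tensor([[2.0], [-1.0], [0.5]])
targets = torch.tensor([[1.0], [0.0], [1.0]])

# BCEWithLogitsLoss applies sigmoid internally, so it expects raw logits.
with_logits = nn.BCEWithLogitsLoss()(logits, targets)

# Feeding it sigmoid outputs (as a model ending in nn.Sigmoid does)
# effectively applies sigmoid twice and gives a different loss value.
double_sigmoid = nn.BCEWithLogitsLoss()(torch.sigmoid(logits), targets)

# BCELoss is the variant that expects probabilities already in [0, 1].
bce_on_probs = nn.BCELoss()(torch.sigmoid(logits), targets)

print(with_logits.item(), double_sigmoid.item(), bce_on_probs.item())
```

The first and third values agree, while the double-sigmoid one differs, so I suspect the loss/activation pairing matters here.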

&lt;p&gt;The last part gives weird results, as follows:&lt;br&gt;
torch.int64&lt;br&gt;
Epoch 1, Loss: 0.6607, Test Accuracy: 0.5084&lt;br&gt;
Epoch 2, Loss: 0.6602, Test Accuracy: 0.5084&lt;br&gt;
Epoch 3, Loss: 0.6509, Test Accuracy: 0.5084&lt;br&gt;
Epoch 4, Loss: 0.6422, Test Accuracy: 0.5347&lt;br&gt;
Epoch 5, Loss: 0.6585, Test Accuracy: 0.6086&lt;br&gt;
Epoch 6, Loss: 0.6395, Test Accuracy: 0.6358&lt;br&gt;
Epoch 7, Loss: 0.6488, Test Accuracy: 0.6451&lt;br&gt;
Epoch 8, Loss: 0.6417, Test Accuracy: 0.6504&lt;br&gt;
Epoch 9, Loss: 0.6505, Test Accuracy: 0.6533&lt;br&gt;
Epoch 10, Loss: 0.6433, Test Accuracy: 0.6570&lt;br&gt;
Epoch 11, Loss: 0.6376, Test Accuracy: 0.6598&lt;br&gt;
Epoch 12, Loss: 0.6424, Test Accuracy: 0.6600&lt;br&gt;
Epoch 13, Loss: 0.6412, Test Accuracy: 0.6609&lt;br&gt;
Epoch 14, Loss: 0.6360, Test Accuracy: 0.6622&lt;br&gt;
Epoch 15, Loss: 0.6475, Test Accuracy: 0.6631&lt;br&gt;
Epoch 16, Loss: 0.6541, Test Accuracy: 0.6643&lt;br&gt;
Epoch 17, Loss: 0.6539, Test Accuracy: 0.6653&lt;br&gt;
Epoch 18, Loss: 0.6331, Test Accuracy: 0.6656&lt;br&gt;
Epoch 19, Loss: 0.6458, Test Accuracy: 0.6657&lt;br&gt;
Epoch 20, Loss: 0.6363, Test Accuracy: 0.6661&lt;br&gt;
Epoch 21, Loss: 0.6193, Test Accuracy: 0.6659&lt;br&gt;
Epoch 22, Loss: 0.6422, Test Accuracy: 0.6660&lt;br&gt;
Epoch 23, Loss: 0.6311, Test Accuracy: 0.6673&lt;br&gt;
Epoch 24, Loss: 0.6477, Test Accuracy: 0.6682&lt;br&gt;
Epoch 25, Loss: 0.6207, Test Accuracy: 0.6687&lt;br&gt;
Epoch 26, Loss: 0.6352, Test Accuracy: 0.6710&lt;br&gt;
Epoch 27, Loss: 0.6402, Test Accuracy: 0.6721&lt;br&gt;
Epoch 28, Loss: 0.6323, Test Accuracy: 0.6716&lt;br&gt;
Epoch 29, Loss: 0.6454, Test Accuracy: 0.6732&lt;br&gt;
Epoch 30, Loss: 0.6303, Test Accuracy: 0.6735&lt;br&gt;
Epoch 31, Loss: 0.6361, Test Accuracy: 0.6734&lt;br&gt;
Epoch 32, Loss: 0.6385, Test Accuracy: 0.6745&lt;br&gt;
Epoch 33, Loss: 0.6333, Test Accuracy: 0.6754&lt;br&gt;
Epoch 34, Loss: 0.6469, Test Accuracy: 0.6768&lt;br&gt;
Epoch 35, Loss: 0.6028, Test Accuracy: 0.6780&lt;br&gt;
Epoch 36, Loss: 0.6260, Test Accuracy: 0.6771&lt;br&gt;
Epoch 37, Loss: 0.6230, Test Accuracy: 0.6801&lt;br&gt;
Epoch 38, Loss: 0.6486, Test Accuracy: 0.6790&lt;br&gt;
Epoch 39, Loss: 0.6383, Test Accuracy: 0.6808&lt;br&gt;
Epoch 40, Loss: 0.6248, Test Accuracy: 0.6810&lt;br&gt;
Epoch 41, Loss: 0.6400, Test Accuracy: 0.6811&lt;br&gt;
Epoch 42, Loss: 0.6406, Test Accuracy: 0.6818&lt;br&gt;
Epoch 43, Loss: 0.6053, Test Accuracy: 0.6822&lt;br&gt;
Epoch 44, Loss: 0.6365, Test Accuracy: 0.6824&lt;br&gt;
Epoch 45, Loss: 0.6580, Test Accuracy: 0.6831&lt;br&gt;
Epoch 46, Loss: 0.6454, Test Accuracy: 0.6843&lt;br&gt;
Epoch 47, Loss: 0.6489, Test Accuracy: 0.6845&lt;br&gt;
Epoch 48, Loss: 0.6146, Test Accuracy: 0.6858&lt;br&gt;
Epoch 49, Loss: 0.6071, Test Accuracy: 0.6869&lt;br&gt;
Epoch 50, Loss: 0.6227, Test Accuracy: 0.6866&lt;br&gt;
Epoch 51, Loss: 0.6185, Test Accuracy: 0.6871&lt;br&gt;
Epoch 52, Loss: 0.6240, Test Accuracy: 0.6887&lt;br&gt;
Epoch 53, Loss: 0.6312, Test Accuracy: 0.6887&lt;br&gt;
Epoch 54, Loss: 0.6216, Test Accuracy: 0.6885&lt;br&gt;
Epoch 55, Loss: 0.6287, Test Accuracy: 0.6881&lt;br&gt;
Epoch 56, Loss: 0.6261, Test Accuracy: 0.6892&lt;br&gt;
Epoch 57, Loss: 0.6083, Test Accuracy: 0.6897&lt;br&gt;
Epoch 58, Loss: 0.6348, Test Accuracy: 0.6898&lt;br&gt;
Epoch 59, Loss: 0.6443, Test Accuracy: 0.6901&lt;br&gt;
Epoch 60, Loss: 0.6102, Test Accuracy: 0.6924&lt;br&gt;
Epoch 61, Loss: 0.6331, Test Accuracy: 0.6901&lt;br&gt;
Epoch 62, Loss: 0.6264, Test Accuracy: 0.6910&lt;br&gt;
Epoch 63, Loss: 0.6017, Test Accuracy: 0.6911&lt;br&gt;
Epoch 64, Loss: 0.6241, Test Accuracy: 0.6915&lt;br&gt;
Epoch 65, Loss: 0.6350, Test Accuracy: 0.6927&lt;br&gt;
Epoch 66, Loss: 0.6080, Test Accuracy: 0.6933&lt;br&gt;
Epoch 67, Loss: 0.6064, Test Accuracy: 0.6928&lt;br&gt;
Epoch 68, Loss: 0.6013, Test Accuracy: 0.6930&lt;br&gt;
Epoch 69, Loss: 0.6134, Test Accuracy: 0.6947&lt;br&gt;
Epoch 70, Loss: 0.6079, Test Accuracy: 0.6932&lt;br&gt;
Epoch 71, Loss: 0.6371, Test Accuracy: 0.6936&lt;br&gt;
Epoch 72, Loss: 0.6320, Test Accuracy: 0.6951&lt;br&gt;
Epoch 73, Loss: 0.6258, Test Accuracy: 0.6943&lt;br&gt;
Epoch 74, Loss: 0.6089, Test Accuracy: 0.6949&lt;br&gt;
Epoch 75, Loss: 0.6142, Test Accuracy: 0.6949&lt;br&gt;
Epoch 76, Loss: 0.6109, Test Accuracy: 0.6965&lt;br&gt;
Epoch 77, Loss: 0.6138, Test Accuracy: 0.6972&lt;br&gt;
Epoch 78, Loss: 0.6077, Test Accuracy: 0.6964&lt;br&gt;
Epoch 79, Loss: 0.6300, Test Accuracy: 0.6964&lt;br&gt;
Epoch 80, Loss: 0.6348, Test Accuracy: 0.6976&lt;br&gt;
Epoch 81, Loss: 0.6145, Test Accuracy: 0.6982&lt;br&gt;
Epoch 82, Loss: 0.6276, Test Accuracy: 0.6991&lt;br&gt;
Epoch 83, Loss: 0.6181, Test Accuracy: 0.7001&lt;br&gt;
Epoch 84, Loss: 0.6333, Test Accuracy: 0.6989&lt;br&gt;
Epoch 85, Loss: 0.6119, Test Accuracy: 0.6994&lt;br&gt;
Epoch 86, Loss: 0.5859, Test Accuracy: 0.6993&lt;br&gt;
Epoch 87, Loss: 0.6312, Test Accuracy: 0.7005&lt;br&gt;
Epoch 88, Loss: 0.6394, Test Accuracy: 0.7007&lt;br&gt;
Epoch 89, Loss: 0.6410, Test Accuracy: 0.7014&lt;br&gt;
Epoch 90, Loss: 0.6238, Test Accuracy: 0.7024&lt;br&gt;
Epoch 91, Loss: 0.6405, Test Accuracy: 0.7026&lt;br&gt;
Epoch 92, Loss: 0.6310, Test Accuracy: 0.7029&lt;br&gt;
Epoch 93, Loss: 0.6087, Test Accuracy: 0.7042&lt;br&gt;
Epoch 94, Loss: 0.6277, Test Accuracy: 0.7035&lt;br&gt;
Epoch 95, Loss: 0.6142, Test Accuracy: 0.7045&lt;br&gt;
Epoch 96, Loss: 0.6347, Test Accuracy: 0.7045&lt;br&gt;
Epoch 97, Loss: 0.5915, Test Accuracy: 0.7058&lt;br&gt;
Epoch 98, Loss: 0.6408, Test Accuracy: 0.7059&lt;br&gt;
Epoch 99, Loss: 0.6111, Test Accuracy: 0.7053&lt;br&gt;
Epoch 100, Loss: 0.6109, Test Accuracy: 0.7073&lt;/p&gt;

&lt;p&gt;Is this by any chance an indicator of overfitting or underfitting? If it is, how do I resolve it?&lt;/p&gt;
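&lt;p&gt;As I understand it, overfitting usually shows up as training accuracy pulling well ahead of test accuracy, while underfitting shows both stuck low. Since my loop above only prints test accuracy, here is a minimal, self-contained sketch (synthetic data and illustrative names, not my actual pipeline) of logging both so the gap is visible:&lt;/p&gt;

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for the real data: 5 features, binary target.
X = torch.randn(400, 5)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).float().unsqueeze(1)
X_train, X_test, y_train, y_test = X[:320], X[320:], y[:320], y[320:]

model = nn.Sequential(nn.Linear(5, 16), nn.LeakyReLU(), nn.Linear(16, 1))
criterion = nn.BCEWithLogitsLoss()  # takes raw logits; no Sigmoid layer here
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(50):
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(X_train), y_train)
    loss.backward()
    optimizer.step()

    model.eval()
    with torch.no_grad():
        # Threshold sigmoid of the logits at 0.5 for hard predictions.
        train_acc = ((torch.sigmoid(model(X_train)) > 0.5).float()
                     == y_train).float().mean().item()
        test_acc = ((torch.sigmoid(model(X_test)) > 0.5).float()
                    == y_test).float().mean().item()

# A large train/test gap suggests overfitting; both staying low suggests underfitting.
print(f"train_acc={train_acc:.3f} test_acc={test_acc:.3f} gap={train_acc - test_acc:.3f}")
```

Tracking both numbers per epoch makes the gap visible directly instead of guessing from test accuracy alone.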

&lt;p&gt;I have attached the dataset with this case as well. The features were chosen based on their positive correlation with the target.&lt;br&gt;
&lt;a href="https://drive.google.com/file/d/1UMUpuBvJJP1069EI1ZA4mTSA0vq1yi3u/view?usp=sharing"&gt;https://drive.google.com/file/d/1UMUpuBvJJP1069EI1ZA4mTSA0vq1yi3u/view?usp=sharing&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks in advance, guys!&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>classification</category>
      <category>neuralnetworks</category>
    </item>
  </channel>
</rss>
