2nd Year Weekly Task 1
Welcome to the Cognizance Weekly Task. Solve the given set of problems to earn XP points.
Note: You are not required to attempt every question in every domain, but the more you do, the more you learn and the more XP points you gain.
Open Source
Authors - Jahnavi and Nehal Khan
API
Question
You are tasked with creating a REST API using any programming language of your choice.
Objective:
- The API can perform any function of your choice.
- The main goal of this task is to learn what a REST API is and what it is used for.
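Since the language and function are your choice, here is a minimal sketch of the idea using only the Python standard library (frameworks such as Flask or FastAPI make this shorter, but the core is the same): an HTTP method plus a path is mapped to a handler that returns JSON. The `/tasks` endpoint and its in-memory data are placeholders for whatever function you choose.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

TASKS = [{"id": 1, "title": "learn REST"}]       # in-memory "database"

class TaskAPI(BaseHTTPRequestHandler):
    def do_GET(self):                            # handles GET requests
        if self.path == "/tasks":
            body = json.dumps(TASKS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):                # silence request logging
        pass

# Serve on any free port in a background thread, then act as a client.
server = HTTPServer(("127.0.0.1", 0), TaskAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/tasks"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
server.shutdown()
print(data)  # -> [{'id': 1, 'title': 'learn REST'}]
```

A real submission would typically add POST/PUT/DELETE handlers so each HTTP verb maps to a create/update/delete operation on the resource.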
References:
Submission Guidelines:
- Upload a video demonstrating the code on any platform (YouTube, Drive, etc.)
- The video must be publicly accessible if uploaded to Drive; the task will not be considered for evaluation if access is private.
- Submit the code in the GitHub directory.
- Note: The video's creation and editing will also be taken into account. It does not matter if you are not good at it; there is always a beginning and room for improvement. BE CONFIDENT!
Cyber Security
Authors - Keerthi Rohan and Subramanian
Question 1
The Bandit Heist
Objective:
Your mission is to infiltrate The Bandit’s server and progress through a series of challenges to gain insights into their methods. Successfully completing each level will bring you closer to uncovering the tools and techniques used by The Bandits.
Submission Guidelines:
- Include your name and the task name in the file.
- Provide a detailed explanation of the steps taken to solve at least the first 5 levels.
- Include the commands used and their purposes.
- Specify the highest level you reached in the game.
- Mention any significant learnings or techniques discovered during the challenge.
- (Bonus) Share any tools/scripts developed to automate tasks.
- Create a GitHub repository and upload both the PDF file and any scripts/tools created.
- Submit the GitHub repository link in the forms.
(The PDF should contain submission guidelines 1, 2, 3, and 4.)
Guidance:
If you are stuck or at a dead end while solving a challenge, you may look up solutions online to understand the approach. However, proceed with caution: use these solutions as a learning aid and not as a shortcut. Attempt to solve the challenges honestly to maximize your learning experience.
Start The Game:
References:
- Reference - 1: Basic Linux Commands
- Reference - 2: Secure Shell (SSH) Guide
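Before connecting, you can rehearse the commands the early levels rely on in a local practice directory (a sketch, not part of the game itself). The host and port in the comment are the well-known OverTheWire Bandit entry point; the password for bandit0 is given on the game's page.

```shell
# Real entry point (password shown on the OverTheWire Bandit page):
#   ssh bandit0@bandit.labs.overthewire.org -p 2220

# Local warm-up: recreate a typical Bandit situation -- a "password"
# hidden in a dotfile -- and practice finding it.
mkdir -p /tmp/bandit_practice && cd /tmp/bandit_practice
echo "not-the-password" > readme
mkdir -p inhere && echo "s3cr3t" > inhere/.hidden

ls -la                        # plain ls misses dotfiles; -a reveals them
cat readme                    # print a file's contents
find . -type f -name ".*"     # locate hidden files anywhere below .
cat inhere/.hidden            # prints: s3cr3t
```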
Note: Complete as much of the challenge as you can, and document your process in detail.
Competitive Programming
Authors - Kaushik Kumbhat and Adithiyan P V
The HackerRank contest below contains 4 questions based on linked lists. Refer to the links provided below or explore other YouTube channels to learn the concepts. Feel free to contact any of the mentors if you have doubts about the concepts or your code.
Contest:
Best of luck!
Note: Kindly don’t use AI to code.
Reference Links:
Hints for the Problems in Contest:
- Cycle Detection: Use two pointers moving at different speeds.
- Reverse: Manipulate the links, keeping the nodes as they are.
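As a study aid (not a contest submission), the two hints above can be sketched for a singly linked list; the `Node` class is a generic stand-in, not HackerRank's exact template.

```python
class Node:
    def __init__(self, data, next=None):
        self.data, self.next = data, next

def has_cycle(head):
    # Two pointers at different speeds (Floyd's algorithm): if there is
    # a cycle, the fast pointer eventually laps the slow one and they meet.
    slow = fast = head
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
        if slow is fast:
            return True
    return False

def reverse(head):
    # Re-point each link backwards; the nodes themselves stay where they are.
    prev = None
    while head:
        head.next, prev, head = prev, head, head.next
    return prev
```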
Artificial Intelligence
Authors - Jagaadhep U K and Ragi Pranav
Task 1: Supervised Learning with TensorFlow
Objective:
Build a deep learning classification model to predict heart disease risk. This task will help you understand the fundamentals of building a deep learning pipeline using TensorFlow, preprocessing data, training a model, and evaluating its performance.
Supervised learning involves training a model on labeled data, where the objective is to learn the mapping from input features to a target output. This task emphasizes the use of TensorFlow for building and training a deep learning model.
Dataset:
Heart Disease Dataset (available in TensorFlow Datasets, converted to CSV).
Steps:
Load the Dataset
- Load the Heart Disease dataset from TensorFlow Datasets.
- Convert it into CSV format for preprocessing.
Code:
import tensorflow_datasets as tfds
Preprocess the Data
- Load the CSV file using Pandas.
- Handle missing values.
- Normalize numerical features.
- Encode categorical variables using one-hot encoding.
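The preprocessing steps above can be sketched in Pandas on a tiny stand-in frame; the column names (`age`, `thal`, `target`) follow the common heart-disease CSV layout and are assumptions, not part of the task statement.

```python
import pandas as pd

# Tiny stand-in for the real CSV loaded with pd.read_csv("heart.csv").
df = pd.DataFrame({
    "age": [63, 67, None, 41],                       # one missing value
    "thal": ["fixed", "normal", "normal", "reversible"],
    "target": [0, 1, 0, 1],
})

df["age"] = df["age"].fillna(df["age"].mean())       # handle missing values
df["age"] = (df["age"] - df["age"].mean()) / df["age"].std()  # normalize
df = pd.get_dummies(df, columns=["thal"])            # one-hot encode
print(df.columns.tolist())
```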
Build and Train a Deep Learning Model
- Define a TensorFlow Sequential model with:
- Dense layers with ReLU activation.
- Output layer with sigmoid activation for binary classification.
- Compile the model with binary_crossentropy loss and the Adam optimizer.
- Train the model using the training set for 20 epochs with a batch size of 32.
Evaluate the Model
- Split the data into training and testing sets (80-20 split).
- Evaluate the model’s performance using accuracy and F1-score on the test set.
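The model described above might look like the following sketch. The synthetic data (13 features, the usual heart-disease column count) is a stand-in so the snippet runs end to end; the layer widths are arbitrary choices, and the real task trains for 20 epochs on the actual dataset.

```python
import numpy as np
import tensorflow as tf

def build_model(n_features: int) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(16, activation="relu"),   # hidden layers
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"), # binary output
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Synthetic stand-in data; replace with the preprocessed heart-disease CSV.
X = np.random.rand(100, 13).astype("float32")
y = np.random.randint(0, 2, size=(100,))

model = build_model(13)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)  # task: 20 epochs
```

For the F1-score, `sklearn.metrics.f1_score` can be applied to the thresholded predictions (`model.predict(X_test) > 0.5`).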
Deliverables:
- A Google Colab Notebook containing:
- Code for data preprocessing, model creation, training, and evaluation.
- Comments explaining each step.
- A summary of the model’s accuracy and F1-score.
Task 2: Unsupervised Learning with Scikit-learn
Objective:
Perform clustering on wine quality data to group samples based on their physicochemical properties. This task focuses on understanding clustering, an unsupervised learning technique where data is grouped into clusters based on feature similarity.
Dataset:
Wine Quality Dataset (from Scikit-learn, converted to CSV).
Steps:
Load the Dataset
- Use Scikit-learn to load the Wine Quality dataset and save it as a CSV file.
Code:
from sklearn.datasets import fetch_openml
Preprocess the Data
- Load the CSV file in Pandas.
- Standardize the features using StandardScaler to ensure all features are on the same scale.
Apply KMeans Clustering
- Set the number of clusters to 3.
- Use Scikit-learn’s KMeans algorithm to cluster the data.
Visualize the Clusters
- Use a scatter plot to visualize two features (e.g., alcohol and acidity) and color-code the clusters.
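Steps 2-4 above can be sketched as follows. The task fetches the Wine Quality data via `fetch_openml`; to keep this sketch runnable offline, Scikit-learn's built-in wine dataset (`load_wine`) stands in, and `malic_acid` stands in for the acidity feature. Swap in `fetch_openml("wine-quality-red")` and the real column names for the actual task.

```python
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

X = load_wine(as_frame=True).data                # stand-in for the CSV
X_scaled = StandardScaler().fit_transform(X)     # zero mean, unit variance

km = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = km.fit_predict(X_scaled)
print(sorted(set(labels.tolist())))              # -> [0, 1, 2]

# Scatter plot of two features, colour-coded by cluster:
# import matplotlib.pyplot as plt
# plt.scatter(X["alcohol"], X["malic_acid"], c=labels)
# plt.xlabel("alcohol"); plt.ylabel("malic acid"); plt.savefig("clusters.png")
```

Standardizing first matters: KMeans uses Euclidean distance, so an unscaled feature with a large range would dominate the clustering.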
Deliverables:
- A Google Colab Notebook containing:
- Code for data preprocessing and clustering.
- Comments explaining each step.
- Visualizations of the clusters.
References
General References:
Supervised Learning:
Unsupervised Learning:
Specific Techniques:
- Data Splitting: Train-Test Split
- Handling Missing Values: YouTube Guide
- Scaling Data: StandardScaler vs MinMaxScaler
Note: Complete each task in Google Colab and upload your notebooks to your GitHub profile.
Submission
Deadline - 1st January 2025, 23:59
NOTE: Create a GitHub repository named “Cognizance_2nd_Year_Task_1”. For each domain, create a separate directory: “OS”, “AI”, “CYS”, and “CP”. Within each domain directory, create a sub-directory per question (“Q1”, “Q2”). Finally, upload the relevant files to these directories and fill out the submission form with the link to this repository.
└── Cognizance_2nd_Year_Task_1
    ├── OS
    │   └── Q1
    ├── CYS
    │   └── Q1
    ├── CP
    │   └── Q1
    └── AI
        ├── Q1
        └── Q2