closes #12 #14
Conversation
Caution: Review failed. The pull request is closed.

Walkthrough
The changes in this pull request involve modifications to three files.

Changes
Assessment against linked issues
Actionable comments posted: 4
🧹 Outside diff range and nitpick comments (1)
step1_prepare/step1_3_split_data.py (1)
1-5: Add type hints and docstring for better code documentation

Consider adding type hints and a comprehensive docstring to improve code maintainability and documentation.
```diff
 import os
 import pandas as pd
 from sklearn.model_selection import train_test_split
-def split_multiple_files(input_directory, train_ratio=0.8):
+def split_multiple_files(input_directory: str, train_ratio: float = 0.8) -> None:
+    """Split multiple CSV files into training and testing datasets.
+
+    Args:
+        input_directory: Directory containing CSV files to split
+        train_ratio: Ratio of data to use for training (default: 0.8)
+
+    Raises:
+        ValueError: If train_ratio is not between 0 and 1
+    """
+    if not 0 < train_ratio < 1:
+        raise ValueError("train_ratio must be between 0 and 1")
```
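If the suggested guard is adopted, a quick way to see it in action (assuming the package layout shown in this PR and running from the repository root) would be:

```python
from step1_prepare.step1_3_split_data import split_multiple_files

# With the proposed validation in place, an out-of-range ratio fails fast
# instead of silently producing an empty or full training split.
try:
    split_multiple_files("step1_prepare/", train_ratio=1.5)
except ValueError as exc:
    print(exc)  # -> train_ratio must be between 0 and 1
```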
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (4)
- step1_prepare/split/test_step1_1_storybook_learning_events.csv is excluded by !**/*.csv
- step1_prepare/split/test_step1_1_storybooks.csv is excluded by !**/*.csv
- step1_prepare/split/train_step1_1_storybook_learning_events.csv is excluded by !**/*.csv
- step1_prepare/split/train_step1_1_storybooks.csv is excluded by !**/*.csv
📒 Files selected for processing (3)
- requirements.txt (1 hunks)
- run_all_steps.py (1 hunks)
- step1_prepare/step1_3_split_data.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- requirements.txt
run_all_steps.py
Outdated
```python
# Step 1.3 Split Data
import step1_prepare.step1_3_split_data
step1_prepare.step1_3_split_data.split_multiple_files(input_directory='/Users/nitikabahl/story recemonder/ml-storybook-recommender/step1_prepare/')
```
Replace hardcoded absolute path with relative path
The current implementation uses a hardcoded absolute path that:
- Contains spaces which could cause issues
- Is specific to a user's local machine
- Won't work in different environments
Consider using a relative path instead:
-step1_prepare.step1_3_split_data.split_multiple_files(input_directory='/Users/nitikabahl/story recemonder/ml-storybook-recommender/step1_prepare/')
+step1_prepare.step1_3_split_data.split_multiple_files(input_directory='.')
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
step1_prepare.step1_3_split_data.split_multiple_files(input_directory='.')
```
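Beyond the bot's suggestion, another option (sketched here as an assumption, not part of this PR's diff) is to resolve the data directory from the location of run_all_steps.py itself, which keeps the call working regardless of the current working directory:

```python
from pathlib import Path

import step1_prepare.step1_3_split_data

# Assumes this snippet lives in run_all_steps.py at the repository root,
# so the step1_prepare folder sits next to it.
INPUT_DIR = Path(__file__).resolve().parent / "step1_prepare"

step1_prepare.step1_3_split_data.split_multiple_files(input_directory=str(INPUT_DIR))
```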
step1_prepare/step1_3_split_data.py
Outdated
```python
if __name__ == "__main__":
    # Example usage for splitting multiple files in a directory
    split_multiple_files(input_directory='/Users/nitikabahl/story recemonder/ml-storybook-recommender/step1_prepare/')
```
Remove hardcoded path from example usage
The example usage contains the same hardcoded path issue as in run_all_steps.py.
if __name__ == "__main__":
# Example usage for splitting multiple files in a directory
- split_multiple_files(input_directory='/Users/nitikabahl/story recemonder/ml-storybook-recommender/step1_prepare/')
+ split_multiple_files(input_directory='.')
📝 Committable suggestion
```python
if __name__ == "__main__":
    # Example usage for splitting multiple files in a directory
    split_multiple_files(input_directory='.')
```
step1_prepare/step1_3_split_data.py
Outdated
```python
    # Get list of all CSV files in the directory
    csv_files = [f for f in os.listdir(input_directory) if f.endswith('.csv')]

    # Check if any CSV files were found
    if not csv_files:
        print("No CSV files found in the directory.")
        return
```
Enhance input validation and error handling
The current implementation needs more robust input validation and error handling:
- Directory existence check
- Permission validation
- Case-insensitive CSV extension matching
def split_multiple_files(input_directory: str, train_ratio: float = 0.8) -> None:
+ # Validate directory
+ if not os.path.isdir(input_directory):
+ raise ValueError(f"Directory not found: {input_directory}")
+
# Get list of all CSV files in the directory
- csv_files = [f for f in os.listdir(input_directory) if f.endswith('.csv')]
+ try:
+ csv_files = [f for f in os.listdir(input_directory)
+ if f.lower().endswith('.csv')]
+ except PermissionError:
+ raise PermissionError(f"Permission denied accessing: {input_directory}")
📝 Committable suggestion
```python
    # Validate directory
    if not os.path.isdir(input_directory):
        raise ValueError(f"Directory not found: {input_directory}")

    # Get list of all CSV files in the directory
    try:
        csv_files = [f for f in os.listdir(input_directory)
                     if f.lower().endswith('.csv')]
    except PermissionError:
        raise PermissionError(f"Permission denied accessing: {input_directory}")

    # Check if any CSV files were found
    if not csv_files:
        print("No CSV files found in the directory.")
        return
```
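Pulled out of the suggestion above, the same checks could also live in a small standalone helper; a minimal sketch (the helper name `list_csv_files` is hypothetical, not part of this PR):

```python
import os
from typing import List


def list_csv_files(input_directory: str) -> List[str]:
    """Return CSV filenames in input_directory, matched case-insensitively."""
    # Fail early with a clear message if the directory is missing.
    if not os.path.isdir(input_directory):
        raise ValueError(f"Directory not found: {input_directory}")
    try:
        return [f for f in os.listdir(input_directory) if f.lower().endswith(".csv")]
    except PermissionError:
        # Re-raise with the offending path to make debugging easier.
        raise PermissionError(f"Permission denied accessing: {input_directory}")
```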
step1_prepare/step1_3_split_data.py
Outdated
```python
    for file in csv_files:
        input_file = os.path.join(input_directory, file)

        # Load dataset
        print(f"Loading data from {input_file}...")
        data = pd.read_csv(input_file)

        # Check if the dataset is empty
        if data.empty:
            print(f"Warning: {file} is empty. Skipping...")
            continue

        # Split data
        print(f"Splitting data into {int(train_ratio*100)}% train and {int((1-train_ratio)*100)}% test sets.")
        train_data, test_data = train_test_split(data, test_size=(1 - train_ratio), random_state=42)

        # Save splits
        output_dir = os.path.join(input_directory, 'split')
        os.makedirs(output_dir, exist_ok=True)

        train_output = os.path.join(output_dir, f"train_{file}")
        test_output = os.path.join(output_dir, f"test_{file}")

        # Save the split datasets to CSV
        train_data.to_csv(train_output, index=False)
        test_data.to_csv(test_output, index=False)

        print(f"Data from {file} split and saved successfully.")
```
🛠️ Refactor suggestion
Improve error handling and memory efficiency
The main processing loop needs several improvements:
- Error handling for file operations
- Memory optimization for large files
- Progress tracking for multiple files
+ total_files = len(csv_files)
+ for idx, file in enumerate(csv_files, 1):
input_file = os.path.join(input_directory, file)
# Load dataset
- print(f"Loading data from {input_file}...")
+ print(f"Processing file {idx}/{total_files}: {file}")
+ try:
data = pd.read_csv(input_file)
+ except Exception as e:
+ print(f"Error reading {file}: {str(e)}")
+ continue
# Check if the dataset is empty
if data.empty:
print(f"Warning: {file} is empty. Skipping...")
continue
+ # Validate data structure
+ if len(data.columns) == 0:
+ print(f"Warning: {file} has no columns. Skipping...")
+ continue
+
# Split data
print(f"Splitting data into {int(train_ratio*100)}% train and {int((1-train_ratio)*100)}% test sets.")
- train_data, test_data = train_test_split(data, test_size=(1 - train_ratio), random_state=42)
+ try:
+ # Process in chunks for large files
+ chunk_size = 100000 # Adjust based on available memory
+ if len(data) > chunk_size:
+ train_chunks = []
+ test_chunks = []
+ for chunk in pd.read_csv(input_file, chunksize=chunk_size):
+ train_chunk, test_chunk = train_test_split(
+ chunk, test_size=(1 - train_ratio), random_state=42
+ )
+ train_chunks.append(train_chunk)
+ test_chunks.append(test_chunk)
+ train_data = pd.concat(train_chunks)
+ test_data = pd.concat(test_chunks)
+ else:
+ train_data, test_data = train_test_split(
+ data, test_size=(1 - train_ratio), random_state=42
+ )
+ except Exception as e:
+ print(f"Error splitting {file}: {str(e)}")
+ continue
Committable suggestion skipped: line range outside the PR's diff.
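The PR summary below says the `tqdm` package was introduced for progress tracking, whereas the diff above uses a manual counter. A minimal sketch of what a `tqdm`-based loop might look like (only the load step is shown; the split-and-save body stays as in the original):

```python
import os

import pandas as pd
from tqdm import tqdm


def split_multiple_files(input_directory: str, train_ratio: float = 0.8) -> None:
    csv_files = [f for f in os.listdir(input_directory) if f.lower().endswith(".csv")]
    # tqdm wraps the iterable and renders a progress bar, ticking once per file.
    for file in tqdm(csv_files, desc="Splitting CSV files", unit="file"):
        input_file = os.path.join(input_directory, file)
        data = pd.read_csv(input_file)
        ...  # split with train_ratio and save, exactly as in the original loop
```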
Issue Number
Purpose
Technical Details
Testing Instructions
Screenshots
Summary by CodeRabbit

New Features
- `tqdm` package for progress tracking during file processing.

Bug Fixes

Chores
- `scikit-learn` and `tqdm` packages.
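Based on the chores listed above, the corresponding requirements.txt entries would presumably include something like the following (package names taken from this summary; versions are not pinned here and may differ in the actual file):

```
scikit-learn
tqdm
```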