Pediatric Sleep Pattern Detection from Wrist Activity Using Random Forests
Introduction
This research explores the development of a predictive model that uses wrist-worn accelerometer data to determine a person's sleep state. Leveraging datasets from the Healthy Brain Network, this study delves into the nuances of sleep onset and wake events, aiming to improve the understanding of sleep in children.
Three datasets are used in this project: a training series file (Zzzs_train.parquet), test_series.parquet, and train_events.csv.
* The Zzzs_train.parquet dataset contains all series to be used as training data. Each series is a continuous recording of accelerometer data for a single subject spanning many days. This dataset has a total of 5 columns, each representing an attribute of the accelerometer series:
  - series_id: a unique identifier for each accelerometer series.
  - step: an integer timestep for each observation within a series.
  - timestamp: a corresponding datetime in ISO 8601 format (%Y-%m-%dT%H:%M:%S%z).
  - anglez: the z-angle, a metric derived from the individual accelerometer components that is commonly used in sleep detection; it refers to the angle of the arm relative to the vertical axis of the body.
  - enmo: the Euclidean Norm Minus One of all accelerometer signals, with negative values rounded to zero.
* The test_series.parquet dataset contains the series to be used as test data, with the same fields as above. I will predict event occurrences for the series in this file.
* The train_events.csv file is a complementary dataset that logs specific sleep events such as sleep onset and wake times. This dataset is pivotal in understanding the transitional moments in sleep and serves as a key component in training and refining the predictive models. To be more specific, it has a total of 5 columns:
  - series_id: a 12-character combination of digits and letters that links each event to its accelerometer series in Zzzs_train.parquet.
  - night: an enumeration of potential onset/wakeup event pairs; at most one pair of events can occur for each night.
  - event: the type of event, either onset or wakeup.
  - step and timestamp: the recorded time of occurrence of the event within the accelerometer series.
The sleep onset and wakeup timeline plot and the boxplot of sleep duration by day of week motivate the exploration of these sleep patterns. Therefore, central to this research is the question: how can we leverage a machine learning model, trained on wrist-worn accelerometer data, to effectively discern and predict individual sleep patterns and disturbances? Addressing this question can help interpret accelerometer data and has the potential to inform interventions and strategies in child psychology and sleep medicine, offering a new lens through which to view and understand sleep.
Methodology
Data Preparation & Feature Engineering: The dataset was cleaned thoroughly and prepared for modeling, with missing values addressed through deletion. Feature selection was based on exploratory analysis of the data distribution patterns, correlation analysis, and the domain knowledge that wearable position and sleep-hour habits are influential factors in a person's sleep patterns. Based on these, several key features were extracted from the accelerometer data, including the anglez and enmo metrics, which are indicators of the wearer's orientation and movement intensity, as well as step, event, hour, and series_id; these are crucial factors to include in the model. In addition, normalization/standardization of the data was implemented after exploratory data analysis but before model training to ensure an equal contribution from each feature, preventing bias towards variables with larger magnitudes.
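As a minimal sketch of this preprocessing step (assuming, as in Appendix D, that the cleaned training frame is named training_set), the standardization uses caret:

library(caret)
# Center and scale every numeric feature so no variable dominates purely by magnitude;
# the same fitted transform is later reapplied to the validation and test data.
preprocess_params <- preProcess(training_set, method = c("center", "scale"))
preprocessed_training_set <- predict(preprocess_params, training_set)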
From the exploratory data analysis on the train_events dataset, the sleep onset and wakeup timeline plot shows that, as time advanced from late 2017 through July 2019 [Figure 1], the observed sleep patterns exhibit a notable degree of stability and consistency. The awakening time predominantly clusters around the 500-minute mark post-midnight, which translates to 8:20 AM. In contrast, the commencement of sleep predominantly spans from 1320 to 1440 minutes after midnight, stretching slightly into the early hours and encapsulating the timeframe from 10:00 PM to 0:30 AM. These findings align with conventional sleep schedules, reinforcing their validity within the context of established sleep norms. This detected sleep pattern motivates the model development.
There are some findings from data wrangling as well. Exploring weekly patterns in sleep duration by classifying the data according to each day of the week is also worthwhile. The boxplot [Figure 2] revealed that sleep durations are generally similar across weekdays and weekends, with two notable exceptions: Saturday showed a marginally longer sleep duration compared to other days, while Thursday emerged as the day with the least amount of sleep. This observation aligns with common expectations, as Thursdays often involve intensive workloads in anticipation of the weekend, potentially leading to reduced sleep, whereas Saturdays provide an opportunity for extended rest and recuperation, especially given the possibility of waking up later on Sunday mornings. Sleep duration on Sundays, however, does not extend significantly, likely because of the need to wake up early on Mondays. Since there is not much variation among the days of the week, the day-of-week factor can be dismissed in the model development.
The average sleep duration by hour of onset plot [Appendix B: Figure 5] clearly illustrates a distinct trend: as bedtime shifts to a later hour, there is a corresponding decrease in the total duration of sleep. This pattern suggests that later sleep onset times are often not compensated by equivalent delays in waking up, resulting in shorter overall sleep periods.
The density plot for sleep onset and wake-up times over time [Appendix B: Figure 6] clearly reveals distinct peak periods for each. Wake-up times predominantly peak between 6 and 7 AM, whereas sleep onset times are more broadly distributed, ranging from 10 PM to an hour past midnight. This finding aligns with our previous plots depicting sleep onset and wake-up timelines.
Model Choice: After reviewing the related literature, this study employs the Random Forest algorithm, selected for its robust performance in complex, high-dimensional classification tasks and its wide use in sleep detection. This choice is preferable over alternatives such as logistic regression or support vector machines because of the algorithm's capability to handle non-linear relationships and to provide insights into feature importance, which is crucial for this analysis. I wrote the code from scratch in my own way, combining both R and Python tools, implemented a fine-tuned model, and evaluated it in detail.
Validation and Testing: The model was trained on the training series dataset and validated using a held-out subset of the data. The final model was then tested on the test_series dataset to predict sleep states. A confidence score metric, quantifying the model's certainty in its predictions about specific sleep-related events such as the onset or cessation (wakeup) of sleep, was derived from the model's probability predictions: it takes the highest probability from the set of class probabilities predicted for each data point, reflecting the model's most confident prediction.
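A minimal sketch of this scoring step, using the fitted random forest (model) and the preprocessed test frame from Appendix D:

prob_predictions <- predict(model, newdata = preprocessed_test, type = "prob")
# Predicted event = class with the larger probability; confidence score = that probability itself
test$predicted_event  <- apply(prob_predictions, 1, function(p) names(p)[which.max(p)])
test$confidence_score <- apply(prob_predictions, 1, max)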
Results
The model achieves peak accuracy with a small number of predictors. This indicates that a few predictors may be highly informative and sufficient to capture the necessary pattern in the data for accurate predictions. As more predictors are added beyond the optimal point, accuracy declines, which can be indicative of overfitting. The model starts to learn the noise in the training data rather than the underlying pattern. In this case, using 3 predictors might be the optimal complexity for the model.
I started training the model with 100 trees and tuned the number of predictors sampled at each split between 1 and 5 in increments of 1. Randomly sampling 3 predictors and growing 335 trees maximized and stabilized the prediction accuracy at 0.9827; in this setting, the out-of-bag error rate is 0.0173.
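A condensed sketch of the tuning loop (the full grid over ntree from 100 to 550 in steps of 5 is in Appendix D); the out-of-bag error reported by randomForest is used as the accuracy criterion:

library(randomForest)
for (mtry in 1:5) {
  set.seed(123)
  fit <- randomForest(awake ~ ., data = preprocessed_training_set, mtry = mtry, ntree = 335)
  oob <- fit$err.rate[nrow(fit$err.rate), "OOB"]   # OOB error after the final tree
  print(paste("mtry =", mtry, "OOB error =", round(oob, 4)))
}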
The ROC curve for the model [Appendix B: Figure 7] looks like a nearly perfect upper triangle, which suggests near-perfect classification accuracy. Because this might indicate some form of data leakage or overfitting, I checked precision, recall, the F1 score, and the confusion matrix to see whether this might be an imbalanced dataset. However, accuracy is 98.34%, Kappa is 0.9669, both recall and specificity are above 98%, precision and negative predictive value are high, balanced accuracy indicates consistent performance, and the F1 score is around 0.98. High performance across all these metrics does not suggest a pressing need for sampling methods to address class imbalance.
Lastly, I applied an evaluation metric for event detection in time series and video, namely Event Detection Average Precision (EDAP), to the testing set predictions and obtained a high score. In this metric, the IoU threshold used in object-detection average precision is replaced by a timestamp error tolerance. The timestamp error tolerances are custom defined as [12, 36, 60, 90, 120, 150, 180, 240, 300, 360] steps for the onset event and the same for the wakeup event. For each event × tolerance group, an Average Precision (AP) score is calculated, which is the area under the precision-recall curve generated by decreasing confidence score thresholds over the predictions. The AP scores are first averaged over tolerances, then over events to produce a single overall score.
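Written out, this reduction step is a nested mean: with the two event classes (onset and wakeup) and the ten tolerances per class used here,

EDAP = (1/2) * sum over events of [ (1/10) * sum over tolerances of AP(event, tolerance) ].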
The feature importance plot [Appendix B: Figure 8] shows that the top 3 most influential features are hour of the day, enmo, and anglez, indicating the significance of time, movement intensity, and orientation in determining sleep states. That hour is the most influential feature makes sense, since sleep patterns are strongly associated with the time of day and usually follow a routine for many people.
The final submission file resembles the example in Appendix C, which shows results tested on a small dataset, where each series has its own onset and wakeup timepoints (indicated by step) along with a prediction confidence score. The sample results are also consistent with common knowledge: the lower the confidence score, the more likely the report is false. For instance, the indicated potential onset events during noon and the afternoon have quite low confidence scores; few people go to bed that early, although some may get up in the afternoon. One method for determining the confidence score is provided above; in addition, filtering methods were experimented with in this project. These can be employed to keep only those events where the time gap between onset and wakeup exceeds a certain number of hours (e.g., 6 hours; this threshold can be adjusted based on needs), or to require that the confidence score for either onset or wakeup is above 60%. If these criteria are not met, it is likely that no event occurred within the 2.5-hour recording period of that series.
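A sketch of this post-filtering on the per-series predictions (final_selection from Appendix D); at 12 steps per minute (5-second epochs), a 6-hour gap corresponds to 6 * 60 * 12 steps, and combining the two criteria with an OR is one possible choice:

filtered_selection <- final_selection %>%
  filter((wakeup_step - onset_step) >= 6 * 60 * 12 |    # onset-to-wakeup gap of at least 6 hours
           onset_score > 0.6 | wakeup_score > 0.6)      # or a confident onset/wakeup prediction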
Conclusion
This study successfully demonstrates the potential of using accelerometer data for sleep state detection. Several methods and parameters designed for the wearable setting are applied, such as the timestamp error tolerance. The high accuracy, AUC score, event detection average precision, and recall achieved by the Random Forest model highlight its effectiveness. An exploration of the GGIR package, designed specifically for accelerometer data, is provided as well; however, even though I transformed the raw parquet data into CSV, GGIR has very strict requirements for the data format, so my implementation without GGIR is more suitable for these datasets. A critical limitation of the current model, which employs a random forest algorithm, pertains to its treatment of the data: the model operates under the premise that each data point, or timepoint, is an isolated event, devoid of temporal correlation with preceding or subsequent data points. This assumption is a significant departure from the inherently sequential and interdependent nature of sleep patterns. In the realm of sleep studies, where temporal sequences and the continuity of data play pivotal roles, this approach might oversimplify complex biological processes, and the model may therefore not fully capture the nuanced dynamics of sleep transitions. To enhance the model's predictive accuracy and clinical relevance, future research should focus on incorporating techniques that recognize and integrate the temporal correlations inherent in sleep patterns. Furthermore, future work could include applying this model to a broader dataset and exploring other combinations of techniques (e.g., RNN/LSTM) for potential accuracy enhancement. Throughout this research, privacy and data security standards were maintained, and strict adherence to ethical guidelines was ensured to prevent any data misuse, underscoring its sole use in enhancing pediatric healthcare research and practice.
References
Cole, R. J., Kripke, D. F., Gruen, W., Mullaney, D. J., & Gillin, J. C. (1992). Automatic sleep/wake identification from wrist activity. Sleep, 15(5), 461–469. https://doi.org/10.1093/sleep/15.5.461
de Zambotti, M., Cellini, N., Goldstone, A., Colrain, I. M., & Baker, F. C. (2019). Wearable Sleep Technology in Clinical and Research Settings. Medicine and science in sports and exercise, 51(7), 1538–1557. https://doi.org/10.1249/MSS.0000000000001947
Esper, N., Demkin, M., Hoolbrok, R., Kotani, Y., Hunt, L., Leroux, A., van Hees, V., Zipunnikov, V., Merikangas, K., Milham, M., Franco, A., & Kiar, G. (2023). Child Mind Institute - Detect Sleep States. Kaggle. https://kaggle.com/competitions/child-mind-institute-detect-sleep-states
Sundararajan, K., Georgievska, S., Te Lindert, B. H. W., Gehrman, P. R., Ramautar, J., Mazzotti, D. R., Sabia, S., Weedon, M. N., van Someren, E. J. W., Ridder, L., Wang, J., & van Hees, V. T. (2021). Sleep classification from wrist-worn accelerometer data using random forests. Scientific reports, 11(1), 24. https://doi.org/10.1038/s41598-020-79217-x
Appendix
Appendix A: Additional Dataset Details
Detailed Dataset Information
Train Events (train_events.csv)
Accessible through this link, this dataset comprises sleep logs from accelerometer devices, documenting onset and wake events. It contains five columns, including night (an enumeration of potential onset/wakeup event pairs), event (the type of event), step, and timestamp.
Training Data (Zzzs_train.parquet)
Available here, this dataset includes continuous accelerometer recordings. It features the fields series_id, step, timestamp, anglez, and enmo, the latter two being crucial metrics for sleep detection as described by the GGIR package: enmo (the Euclidean Norm Minus One, with negative values rounded to zero) is an acceleration metric describing physical activity, and anglez is the angle of the arm relative to the vertical axis of the body.
Test Data (test_series.parquet)
This dataset, used for testing, mirrors the structure of the training data. It can be accessed here.
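For reference, a minimal sketch of loading these files in R (the arrow package reads the parquet files, as in Appendix D):

library(arrow)
library(readr)
train_series <- read_parquet("Zzzs_train.parquet")   # continuous accelerometer recordings
test_series  <- read_parquet("test_series.parquet")  # series for which events are predicted
train_events <- read_csv("train_events.csv")         # logged onset/wakeup events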
Appendix B: Additional Plots
Confusion Matrix and Statistics
Reference
Prediction 0 1
0 148 2
1 3 149
Accuracy : 0.9834
95% CI : (0.9618, 0.9946)
No Information Rate : 0.5
P-Value [Acc > NIR] : <2e-16
Kappa : 0.9669
Mcnemar's Test P-Value : 1
Sensitivity : 0.9801
Specificity : 0.9868
Pos Pred Value : 0.9867
Neg Pred Value : 0.9803
Prevalence : 0.5000
Detection Rate : 0.4901
Detection Prevalence : 0.4967
Balanced Accuracy : 0.9834
'Positive' Class : 0
[1] "Precision: 0.986666666666667"
[1] "Recall: 0.980132450331126"
[1] "F1 Score: 0.983388704318937"
Appendix C: Tests on A Small Sample Data
# A tibble: 8 × 5
row_id series_id step event score
<dbl> <chr> <dbl> <chr> <dbl>
1 0 038441c925bb 124 onset 0.958
2 1 038441c925bb 922 wakeup 0.901
3 2 038491c925aa 315 onset 0.731
4 3 038491c925aa 478 wakeup 0.949
5 4 03d92c9f6f8a 730 onset 0.2
6 5 03d92c9f6f8a 724 wakeup 0.955
7 6 0402a003dae9 842 onset 0.233
8 7 0402a003dae9 839 wakeup 0.949
Appendix D: Code Details
#- Load libraries
library(tidyverse)
library(arrow)
library(skimr)
library(dplyr)
library(ggplot2)
library(lubridate)
library(caret)
library(randomForest)
library(patchwork)
library(pROC)
library(purrr)
#- Read train_events and modify timestamp with lubridate
#- Read events
events <- read_csv("train_events.csv") %>%
  mutate(dt = as_datetime(timestamp)) %>%
  mutate(dt = dt - hours(4)) %>%
  mutate(hr = hour(dt)) %>%
  select(-timestamp)
head(events)

#- Events counts
events %>% count(event)

#- Sleep onset and wakeup timeline
# Merge onset and wakeup data on dates
timeline_data <- events %>%
  filter(event %in% c("onset", "wakeup")) %>%
  group_by(date = as.Date(dt)) %>%
  mutate(time_minutes = hour(dt) * 60 + minute(dt))
# Create a scatter plot
plot1 <- ggplot(timeline_data, aes(x = date, y = time_minutes, color = event)) +
  geom_point(shape = 3, alpha = 0.2) +
  labs(title = "Sleep Onset and Wakeup Timeline",
       x = "Date",
       y = "Time of Day (minutes after midnight)") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1))

# Calculate sleep durations and onset hour
pivot_data <- events %>%
  group_by(series_id, night) %>%
  summarize(duration_minutes = (max(step) - min(step)) / 60,
            onset_hour = hour(min(dt)), .groups = 'drop')

# Calculate average sleep duration by hour of onset
average_duration_by_hour <- pivot_data %>%
  group_by(onset_hour) %>%
  summarize(avg_duration = mean(duration_minutes), .groups = 'drop')
# Create a bar plot
plot2 <- ggplot(average_duration_by_hour, aes(x = onset_hour, y = avg_duration, fill = avg_duration)) +
  geom_bar(stat = "identity") +
  labs(title = "Average Sleep Duration by Hour of Onset",
       x = "Hour of Sleep Onset",
       y = "Average Sleep Duration (minutes)") +
  theme_minimal() +
  scale_fill_gradient(low = "red", high = "green")

# Order days of the week
ordered_days <- c("Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday")

# Calculate sleep duration in minutes and extract day of the week
pivot_data <- events %>%
  group_by(series_id, night) %>%
  summarize(duration_minutes = (max(step) - min(step)) / 60,
            min_datetime = min(dt), .groups = 'drop') %>%
  mutate(day_of_week = factor(format(min_datetime, "%A"), levels = ordered_days))
# Create a box plot
plot3 <- ggplot(pivot_data, aes(x = day_of_week, y = duration_minutes, fill = day_of_week)) +
  geom_boxplot() +
  labs(title = "Box Plot of Sleep Duration by Day of Week",
       x = "Day of the Week",
       y = "Sleep Duration (minutes)") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1))
#- Distribution of wakeup and Distribution of onset
# Filter the data for "wakeup" and "onset" events
events_wakeup <- events %>% filter(event == "wakeup")
events_onset <- events %>% filter(event == "onset")

# Combine the filtered data into a single data frame
combined_events <- rbind(events_wakeup, events_onset)

# Create the plot with density lines for "wakeup" and "onset" events
plot4 <- ggplot(combined_events, aes(x = hr, color = event, fill = event)) +
  geom_density(alpha = 0.5) +
  labs(x = "Hour", y = "Density") +
  scale_color_manual(values = c("wakeup" = "red", "onset" = "blue")) +
  scale_fill_manual(values = c("wakeup" = "red", "onset" = "blue")) +
  theme_minimal()
# Calculate the number of events per series
events_per_series <- events %>%
  group_by(series_id) %>%
  summarize(num_events = n())

# Create a histogram for the distribution of events per series
plot5 <- ggplot(events_per_series, aes(x = num_events)) +
  geom_histogram(bins = 30, fill = "orange", color = "black", alpha = 0.7) +
  labs(title = "Distribution of Number of Events per Series",
       x = "Number of Events",
       y = "Number of Series") +
  theme_minimal()
# Most series exhibit approximately 48 events. However, there are a few series with a significantly lower number of valid events, which may be excluded from the study.
#- Detect NA in events
events_na <- events %>% group_by(series_id, step) %>% filter(is.na(step))
events_na
num_na <- length(unique(events_na$series_id))
na_id <- unique(events_na$series_id)
num_na
na_id
# Steps containing records with 'NA' (not available) were identified as not precise enough and subsequently removed. Our primary focus was on analyzing accelerometer data that is complete, without any missing values ('NA'). This approach ensures the integrity and reliability of our findings.

#- Series_id without NA events
all_id <- unique(events$series_id)
nna_id <- setdiff(all_id, na_id)
nna_id
# In this study, steps containing records with 'NA' (not available) were identified as not precise enough and subsequently removed. Our primary focus was on analyzing accelerometer data that is complete, without any missing values ('NA'). This approach ensures the integrity and reliability of our findings, as we are utilizing only the most accurate and comprehensive data sets available.

#- Remove two truncated event series
trunc <- c("31011ade7c0a", "a596ad0b82aa")
nna_id <- setdiff(nna_id, trunc)
df_nna <- tibble(nna_id) %>% rename(series_id = nna_id)

plot1 <- plot1 + labs(caption = "Figure1") +
  theme(plot.caption = element_text(hjust = 0.5))
plot3 <- plot3 + labs(caption = "Figure 2") +
  theme(plot.caption = element_text(hjust = 0.5))
motivate_plot <- plot1 + plot3 + plot_layout(nrow = 2)
motivate_plot

#- Read training data
train <- read_parquet('Zzzs_train.parquet')
head(train)
nna_train <- right_join(train, df_nna)
nna_event <- right_join(events, df_nna)
train_new <- left_join(nna_train, nna_event)
train_new_full <- right_join(nna_train, nna_event)
head(train_new_full)
training_data <- train_new_full %>%
  select(-timestamp, -event, -dt, -night)
head(training_data)
#- Read testing data
test_old <- read_parquet("test_series.parquet")
# Generate a few random data to enlarge the test_series; the original one only has 3 unique series_id, and each series lasts for 2.5 hours.
set.seed(123)
num_rows <- 150
random_data <- data.frame(
  series_id = rep("038441c925bb", num_rows),
  step = 0:(num_rows - 1),
  timestamp = seq(from = ymd_hms("2023-12-13T22:30:00-0400"),
                  by = "5 sec", length.out = num_rows),
  anglez = runif(num_rows, min = 0, max = 5),    # Random values between 0 and 5
  enmo = runif(num_rows, min = 0, max = 0.05)    # Random values between 0 and 0.05
)
# Convert timestamps to character
random_data$timestamp <- format(random_data$timestamp, format = "%Y-%m-%dT%H:%M:%S-0400")

test <- rbind(test_old, random_data)

random_data <- data.frame(
  series_id = rep("038491c925aa", num_rows),
  step = 0:(num_rows - 1),
  timestamp = seq(from = ymd_hms("2020-10-13T23:45:00-0300"),
                  by = "5 sec", length.out = num_rows),
  anglez = runif(num_rows, min = 0, max = 5),    # Random values between 0 and 5
  enmo = runif(num_rows, min = 0, max = 0.05)    # Random values between 0 and 0.05
)
# Convert timestamps to character
random_data$timestamp <- format(random_data$timestamp, format = "%Y-%m-%dT%H:%M:%S-0500")

test <- rbind(test, random_data)

random_data <- data.frame(
  series_id = rep("038491c925aa", num_rows),
  step = 0:(num_rows - 1),
  timestamp = seq(from = ymd_hms("2021-02-13T02:36:00-0400"),
                  by = "5 sec", length.out = num_rows),
  anglez = runif(num_rows, min = 0, max = 5),    # Random values between 0 and 5
  enmo = runif(num_rows, min = 0, max = 0.05)    # Random values between 0 and 0.05
)
# Convert timestamps to character
random_data$timestamp <- format(random_data$timestamp, format = "%Y-%m-%dT%H:%M:%S-0500")

test <- rbind(test, random_data)

test <- test %>%
  mutate(dt = as_datetime(timestamp)) %>%
  mutate(dt = dt - hours(4)) %>%
  mutate(hr = hour(dt)) %>%
  mutate(step = hr * 60 + step) %>%
  select(-timestamp, -dt)
head(test)
set.seed(123)
split <- createDataPartition(training_data$awake, p = 0.8, list = FALSE)
training_set <- training_data[split, ]
training_set$awake <- as.factor(training_set$awake)
testing_set <- training_data[-split, ]
testing_set$awake <- as.factor(testing_set$awake)
preprocessing_method <- "standardize"  # or "normalize"
# if/else rather than ifelse(), since ifelse() would return only the first element of the method vector
if (preprocessing_method == "standardize") {
  preprocess_params <- preProcess(training_set, method = c("center", "scale"))
} else {
  preprocess_params <- preProcess(training_set, method = c("range"))
}
saveRDS(preprocess_params, file = "preprocess_params.rds")
preprocessed_training_set <- predict(preprocess_params, training_set)
# Initialize a data frame to store results
results_df <- data.frame(ntree = integer(), mtry = integer(), OOBError = numeric())

# Define the range for mtry and ntree
mtry_range <- seq(1, ncol(preprocessed_training_set) - 1, by = 1)  # Full range for predictors
ntree_range <- seq(100, 550, by = 5)                               # Range for ntree

# Loop over mtry and ntree values
for (mtry in mtry_range) {
  for (ntree in ntree_range) {
    set.seed(123)
    model <- randomForest(awake ~ ., data = preprocessed_training_set, mtry = mtry, ntree = ntree, do.trace = FALSE)

    # Extract OOB error rate
    OOBError <- model$err.rate[nrow(model$err.rate), "OOB"]

    # Store results
    results_df <- rbind(results_df, data.frame(ntree = ntree, mtry = mtry, OOBError = OOBError))
  }
}
# Check if results_df is empty or has NA values
if (nrow(results_df) == 0 || any(is.na(results_df$OOBError))) {
stop("No data to plot. Check the random forest model training.")
}
# Plot accuracy vs. mtry (plot accuracy for different mtry at a fixed ntree)
# Calculate accuracy from the OOB error rate
results_df$Accuracy <- 1 - results_df$OOBError
accuracy_plot <- ggplot(subset(results_df, ntree == 550), aes(x = mtry, y = Accuracy)) +
  geom_line() +
  geom_point() +
  labs(title = "Attempt1: max experimental ntree",
       x = "#Randomly Selected Predictors",
       y = "Accuracy (Cross-validation)")
print(accuracy_plot)
# Filter the results for mtry = 2 and mtry = 3
filtered_df <- results_df[results_df$mtry %in% c(2, 3), ]

# Find local minima for each mtry group
local_minima <- filtered_df %>%
  group_by(mtry) %>%
  slice(which(diff(sign(diff(OOBError))) == 2) + 1) %>%
  ungroup()
specific_minima <- local_minima[5, ]

label_point <- data.frame(
  ntree = 335,
  OOBError = 0.01732673,
  label = "ntree: 335\nError: 0.0173")
# Plot OOB error rates for mtry = 2 and mtry = 3
oob_error_plot <- ggplot(filtered_df, aes(x = ntree, y = OOBError, color = as.factor(mtry), group = mtry)) +
  geom_smooth() +
  geom_point(data = specific_minima, aes(x = ntree, y = OOBError), color = "blue", size = 5, shape = 21, fill = "blue") +
  geom_text(data = label_point, aes(x = ntree, y = OOBError, label = label), nudge_y = 0.001, hjust = 0, vjust = 0, color = "black") +
  xlab("Number of Trees") +
  ylab("Out-of-Bag Error Rate") +
  ggtitle("Error Rate Over mtry = 2 and 3") +
  scale_color_manual(values = c("red", "yellow"), labels = c("mtry = 2", "mtry = 3")) +  # labels follow factor level order (2, then 3)
  theme_minimal()
print(oob_error_plot)
results_df$Accuracy <- 1 - results_df$OOBError
accuracy_plot2 <- ggplot(subset(results_df, ntree == 335), aes(x = mtry, y = Accuracy)) +
  geom_line() +
  geom_point() +
  labs(title = "Attempt2: ntree=335",
       x = "#Randomly Selected Predictors",
       y = "Accuracy (Cross-validation)")
print(accuracy_plot2)

accuracy_plot <- accuracy_plot + labs(caption = "Figure3.1") + theme(plot.caption = element_text(hjust = 0.5))
oob_error_plot <- oob_error_plot + labs(caption = "Figure4") + theme(plot.caption = element_text(hjust = 0.5))
accuracy_plot2 <- accuracy_plot2 + labs(caption = "Figure3.2") + theme(plot.caption = element_text(hjust = 0.5))
hyperparam_plot <- accuracy_plot + oob_error_plot + accuracy_plot2 + plot_layout(nrow = 2)
hyperparam_plot

preprocessed_training_set$awake <- as.factor(preprocessed_training_set$awake)
model <- randomForest(awake ~ ., data = preprocessed_training_set, ntree = 335, mtry = 3)
saveRDS(model, file = "ReducRFmodel.rds")
# Calculate training accuracy
confusion_matrix <- model$confusion
training_accuracy <- sum(diag(confusion_matrix)) / sum(confusion_matrix)
print(confusion_matrix)
print(paste("Training Accuracy:", training_accuracy))
oob_error <- model$err.rate[nrow(model$err.rate), "OOB"]
training_oob_accuracy <- 1 - oob_error
print(paste("OOB Error Rate:", oob_error))
print(paste("Training OOB Accuracy:", training_oob_accuracy))
preprocess_params <- readRDS(file = "preprocess_params.rds")
# Apply the same preprocessing to the test data
preprocessed_testing_set <- predict(preprocess_params, testing_set)
# Ensure that 'predict' returns probabilities
testing_set_probs <- predict(model, preprocessed_testing_set, type = "prob")
# Extract probabilities for the positive class ('1' is the positive class)
positive_class_probs <- testing_set_probs[, "1"]
# Calculate ROC object
roc_obj <- roc(preprocessed_testing_set$awake, positive_class_probs)
# Calculate the AUC
auc_value <- auc(roc_obj)
print(auc_value)
# Plot the ROC curve along with AUC
plot(roc_obj, main = paste("ROC Curve, AUC =", round(auc_value, 6)))
write.csv(testing_set, "testing_set.csv", row.names = FALSE)
write.csv(testing_set_probs, "testing_set_probs.csv", row.names = FALSE)
# Extracting predicted class labels with a threshold. I use 0.5 here
predicted_labels <- ifelse(testing_set_probs[, "1"] > 0.5, 1, 0)

# Confusion Matrix
confusionMatrix <- confusionMatrix(factor(predicted_labels), factor(preprocessed_testing_set$awake))

# Printing the Confusion Matrix
print(confusionMatrix)

# Precision, Recall, and F1 Score
precision <- posPredValue(factor(predicted_labels), factor(preprocessed_testing_set$awake))
recall <- sensitivity(factor(predicted_labels), factor(preprocessed_testing_set$awake))
f1_score <- 2 * ((precision * recall) / (precision + recall))

# Printing the metrics
print(paste("Precision:", precision))
print(paste("Recall:", recall))
print(paste("F1 Score:", f1_score))
# Extract feature importance
importance <- importance(model)
colnames(importance)
feature_importance <- data.frame(
  Feature = rownames(importance),
  Importance = importance[, "MeanDecreaseGini"]
)
# Plot using ggplot2
rf_feature_importance_plot <- ggplot(feature_importance, aes(x = reorder(Feature, Importance), y = Importance)) +
  geom_bar(stat = "identity") +
  coord_flip() +  # Flips the axes for horizontal bars
  xlab("Feature") +
  ylab("MeanDecreaseGini") +
  ggtitle("Feature Importance from Random Forest Model") +
  theme_minimal()
preprocess_params <- readRDS(file = "preprocess_params.rds")
# Apply the same preprocessing to the test data
preprocessed_test <- predict(preprocess_params, test)
# Predict probabilities
# This returns a matrix with probabilities for each class
prob_predictions <- predict(model, newdata = preprocessed_test, type = "prob")

# Determine the predicted class based on the higher probability
# and extract the corresponding confidence score
test$predicted_event <- apply(prob_predictions, 1, function(x) names(x)[which.max(x)])
test$confidence_score <- apply(prob_predictions, 1, max)

# Prepare submission data frame and write
submission <- test %>%
  select(series_id, step, predicted_event, confidence_score)
write.csv(submission, "submission.csv", row.names = FALSE)
# Predict probabilities
prob_predictions <- predict(model, newdata = preprocessed_test, type = "prob")
# Add predicted probabilities to the test data
test$onset_confidence <- prob_predictions[, "1"]
test$wakeup_confidence <- prob_predictions[, "0"]

# Determine the most likely onset and wakeup for each series_id:
# for each series_id, find the step with the highest confidence for onset and wakeup
final_selection <- test %>%
  group_by(series_id) %>%
  summarize(
    onset_step = step[which.max(onset_confidence)],
    onset_score = max(onset_confidence),
    wakeup_step = step[which.max(wakeup_confidence)],
    wakeup_score = max(wakeup_confidence)
  ) %>%
  ungroup()

# Reshape the data for submission
final_submission <- final_selection %>%
  select(series_id, onset_step, onset_score, wakeup_step, wakeup_score) %>%
  pivot_longer(
    cols = c(onset_step, wakeup_step),
    names_to = "event_type",
    values_to = "step"
  ) %>%
  mutate(
    event = ifelse(event_type == "onset_step", "onset", "wakeup"),
    score = ifelse(event_type == "onset_step", onset_score, wakeup_score)
  ) %>%
  select(-event_type, -onset_score, -wakeup_score)

# Assign row_id
final_submission <- final_submission %>%
  mutate(row_id = row_number() - 1) %>%
  select(row_id, everything())

# Extract the mean values used for centering
step_mean <- preprocess_params$mean["step"]

# Write submission file
write.csv(final_submission, "final_submission.csv", row.names = FALSE)
# # Filter function
# # Define feature columns used in the model
# feature_cols <- c("series_id", "step", "anglez", "enmo", "hr") # Include 'step' and other features
# # Loop over each series ID
# unique_series_ids<-unique(preprocessed_test$series_id)
# for (series_id in unique_series_ids) {
# series_data <- preprocessed_test %>% filter(series_id == series_id)
#
# # Predict events
# preds <- predict(model, newdata = series_data[feature_cols])
#
# # Detect sleep onsets and wakeups
# pred_changes <- c(FALSE, diff(preds) != 0)
# pred_onsets <- series_data$step[preds == 1 & pred_changes]
# pred_wakeups <- series_data$step[preds == 0 & pred_changes]
#
# # Filter and score events
# valid_periods <- which(pred_wakeups - pred_onsets >= 12 * 30) # Adjust threshold as needed
# if (length(valid_periods) > 0) {
# for (i in valid_periods) {
# onset_step <- pred_onsets[i]
# wakeup_step <- pred_wakeups[i]
# score <- mean(series_data$onset_confidence[onset_step:wakeup_step], na.rm = TRUE)
#
# # Add to final submission
# final_submission <- rbind(final_submission, data.frame(
# series_id = series_id,
# onset_step = onset_step,
# wakeup_step = wakeup_step,
# score = score
# ))
# }
# }
# }
final_submission

library(reticulate)
use_condaenv("env", required = TRUE)
"""Event Detection Average Precision
An average precision metric for event detection in time series and
video.
"""
import numpy as np
import pandas as pd
import pandas.api.types
from typing import Dict, List, Tuple
class ParticipantVisibleError(Exception):
    pass


# Set some placeholders for global parameters
series_id_column_name = None
time_column_name = None
event_column_name = None
score_column_name = None
use_scoring_intervals = None


def score(
        solution: pd.DataFrame,
        submission: pd.DataFrame,
        tolerances: Dict[str, List[float]],
        series_id_column_name: str,
        time_column_name: str,
        event_column_name: str,
        score_column_name: str,
        use_scoring_intervals: bool = False,
) -> float:
    """Event Detection Average Precision, an AUCPR metric for event detection in
time series and video.
This metric is similar to IOU-threshold average precision metrics commonly
used in object detection. For events occuring in time series, we replace the
IOU threshold with a time tolerance.
Submissions are evaluated on the average precision of detected events,
averaged over timestamp error tolerance thresholds, averaged over event
classes.
Detections are matched to ground-truth events within error tolerances, with
ambiguities resolved in order of decreasing confidence.
Detailed Description
--------------------
Evaluation proceeds in four steps:
1. Selection - (optional) Predictions not within a series' scoring
intervals are dropped.
2. Assignment - Predicted events are matched with ground-truth events.
3. Scoring - Each group of predictions is scored against its corresponding
group of ground-truth events via Average Precision.
4. Reduction - The multiple AP scores are averaged to produce a single
overall score.
Selection
With each series there may be a defined set of scoring intervals giving the
intervals of time over which zero or more ground-truth events might be
annotated in that series. A prediction will be evaluated only if it falls
within a scoring interval. These scoring intervals can be chosen to improve
the fairness of evaluation by, for instance, ignoring edge-cases or
ambiguous events.
It is recommended that, if used, scoring intervals be provided for training
data but not test data.
Assignment
For each set of predictions and ground-truths within the same `event x
tolerance x series_id` group, we match each ground-truth to the
highest-confidence unmatched prediction occurring within the allowed
tolerance.
Some ground-truths may not be matched to a prediction and some predictions
may not be matched to a ground-truth. They will still be accounted for in
the scoring, however.
Scoring
Collecting the events within each `series_id`, we compute an Average
Precision score for each `event x tolerance` group. The average precision
score is the area under the (step-wise) precision-recall curve generated by
decreasing confidence score thresholds over the predictions. In this
calculation, matched predictions over the threshold are scored as TP and
unmatched predictions as FP. Unmatched ground-truths are scored as FN.
Reduction
The final score is the average of the above AP scores, first averaged over
tolerance, then over event.
Parameters
----------
solution : pd.DataFrame, with columns:
`series_id_column_name` identifier for each time series
`time_column_name` the time of occurence for each event as a numeric type
`event_column_name` class label for each event
The solution contains the time of occurence of one or more types of
event within one or more time series. The metric expects the solution to
contain the same event types as those given in `tolerances`.
When `use_scoring_intervals == True`, you may include `start` and `end`
events to delimit intervals within which detections will be scored.
Detected events (from the user submission) outside of these events will
be ignored.
submission : pd.DataFrame, with columns as above and in addition:
`score_column_name` the predicted confidence score for the detected event
tolerances : Dict[str, List[float]]
Maps each event class to a list of timestamp tolerances used
for matching detections to ground-truth events.
use_scoring_intervals: bool, default False
Whether to ignore predicted events outside intervals delimited
by `'start'` and `'end'` events in the solution. When `False`,
the solution should not include `'start'` and `'end'` events.
See the examples for illustration.
Returns
-------
event_detection_ap : float
The mean average precision of the detected events.
Examples
--------
Detecting `'pass'` events in football:
>>> column_names = {
... 'series_id_column_name': 'video_id',
... 'time_column_name': 'time',
... 'event_column_name': 'event',
... 'score_column_name': 'score',
... }
>>> tolerances = {'pass': [1.0]}
>>> solution = pd.DataFrame({
... 'video_id': ['a', 'a'],
... 'event': ['pass', 'pass'],
... 'time': [0, 15],
... })
>>> submission = pd.DataFrame({
... 'video_id': ['a', 'a', 'a'],
... 'event': ['pass', 'pass', 'pass'],
... 'score': [1.0, 0.5, 1.0],
... 'time': [0, 10, 14.5],
... })
>>> score(solution, submission, tolerances, **column_names)
1.0
Increasing the confidence score of the false detection above the true
detections decreases the AP.
>>> submission.loc[1, 'score'] = 1.5
>>> score(solution, submission, tolerances, **column_names)
0.6666666666666666...
Likewise, decreasing the confidence score of a true detection below the
false detection also decreases the AP.
>>> submission.loc[1, 'score'] = 0.5 # reset
>>> submission.loc[0, 'score'] = 0.0
>>> score(solution, submission, tolerances, **column_names)
0.8333333333333333...
We average AP scores over tolerances. Previously, the detection at 14.5
would match, but adding smaller tolerances gives AP scores where it does
not match. This results in both a FN, since the ground-truth wasn't
detected, and a FP, since the detected event matches no ground-truth.
>>> tolerances = {'pass': [0.1, 0.2, 1.0]}
>>> score(solution, submission, tolerances, **column_names)
0.3888888888888888...
We also average over time series and over event classes.
>>> tolerances = {'pass': [0.5, 1.0], 'challenge': [0.25, 0.50]}
>>> solution = pd.DataFrame({
... 'video_id': ['a', 'a', 'b'],
... 'event': ['pass', 'challenge', 'pass'],
... 'time': [0, 15, 0], # restart time for new time series b
... })
>>> submission = pd.DataFrame({
... 'video_id': ['a', 'a', 'b'],
... 'event': ['pass', 'challenge', 'pass'],
... 'score': [1.0, 0.5, 1.0],
... 'time': [0, 15, 0],
... })
>>> score(solution, submission, tolerances, **column_names)
1.0
By adding scoring intervals to the solution, we may choose to ignore
detections outside of those intervals.
>>> tolerances = {'pass': [1.0]}
>>> solution = pd.DataFrame({
... 'video_id': ['a', 'a', 'a', 'a'],
... 'event': ['start', 'pass', 'pass', 'end'],
... 'time': [0, 10, 20, 30],
... })
>>> submission = pd.DataFrame({
... 'video_id': ['a', 'a', 'a'],
... 'event': ['pass', 'pass', 'pass'],
... 'score': [1.0, 1.0, 1.0],
... 'time': [10, 20, 40],
... })
>>> score(solution, submission, tolerances, **column_names, use_scoring_intervals=True)
1.0
"""
# Validate metric parameters
    assert len(tolerances) > 0, "Events must have defined tolerances."
    assert set(tolerances.keys()) == set(solution[event_column_name]).difference({'start', 'end'}),\
        (f"Solution column {event_column_name} must contain the same events "
         "as defined in tolerances.")
    assert pd.api.types.is_numeric_dtype(solution[time_column_name]),\
        f"Solution column {time_column_name} must be of numeric type."

    # Validate submission format
    for column_name in [
        series_id_column_name,
        time_column_name,
        event_column_name,
        score_column_name,
    ]:
        if column_name not in submission.columns:
            raise ParticipantVisibleError(f"Submission must have column '{column_name}'.")

    if not pd.api.types.is_numeric_dtype(submission[time_column_name]):
        raise ParticipantVisibleError(
            f"Submission column '{time_column_name}' must be of numeric type."
        )
    if not pd.api.types.is_numeric_dtype(submission[score_column_name]):
        raise ParticipantVisibleError(
            f"Submission column '{score_column_name}' must be of numeric type."
        )

    # Set these globally to avoid passing around a bunch of arguments
    globals()['series_id_column_name'] = series_id_column_name
    globals()['time_column_name'] = time_column_name
    globals()['event_column_name'] = event_column_name
    globals()['score_column_name'] = score_column_name
    globals()['use_scoring_intervals'] = use_scoring_intervals

    return event_detection_ap(solution, submission, tolerances)
def filter_detections(
        detections: pd.DataFrame, intervals: pd.DataFrame
) -> pd.DataFrame:
    """Drop detections not inside a scoring interval."""
    detection_time = detections.loc[:, time_column_name].sort_values().to_numpy()
    intervals = intervals.to_numpy()
    is_scored = np.full_like(detection_time, False, dtype=bool)

    i, j = 0, 0
    while i < len(detection_time) and j < len(intervals):
        time = detection_time[i]
        int_ = intervals[j]

        # If the detection is prior in time to the interval, go to the next detection.
        if time < int_.left:
            i += 1
        # If the detection is inside the interval, keep it and go to the next detection.
        elif time in int_:
            is_scored[i] = True
            i += 1
        # If the detection is later in time, go to the next interval.
        else:
            j += 1

    return detections.loc[is_scored].reset_index(drop=True)
def match_detections(
        tolerance: float, ground_truths: pd.DataFrame, detections: pd.DataFrame
) -> pd.DataFrame:
    """Match detections to ground truth events. Arguments are taken from a common event x tolerance x series_id evaluation group."""
    detections_sorted = detections.sort_values(score_column_name, ascending=False).dropna()
    is_matched = np.full_like(detections_sorted[event_column_name], False, dtype=bool)
    gts_matched = set()
    for i, det in enumerate(detections_sorted.itertuples(index=False)):
        best_error = tolerance
        best_gt = None

        for gt in ground_truths.itertuples(index=False):
            error = abs(getattr(det, time_column_name) - getattr(gt, time_column_name))
            if error < best_error and gt not in gts_matched:
                best_gt = gt
                best_error = error

        if best_gt is not None:
            is_matched[i] = True
            gts_matched.add(best_gt)

    detections_sorted['matched'] = is_matched

    return detections_sorted
def precision_recall_curve(
        matches: np.ndarray, scores: np.ndarray, p: int
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
    if len(matches) == 0:
        return [1], [0], []

    # Sort matches by decreasing confidence
    idxs = np.argsort(scores, kind='stable')[::-1]
    scores = scores[idxs]
    matches = matches[idxs]

    distinct_value_indices = np.where(np.diff(scores))[0]
    threshold_idxs = np.r_[distinct_value_indices, matches.size - 1]
    thresholds = scores[threshold_idxs]

    # Matches become TPs and non-matches FPs as confidence threshold decreases
    tps = np.cumsum(matches)[threshold_idxs]
    fps = np.cumsum(~matches)[threshold_idxs]

    precision = tps / (tps + fps)
    precision[np.isnan(precision)] = 0
    recall = tps / p  # total number of ground truths might be different than total number of matches

    # Stop when full recall attained and reverse the outputs so recall is non-increasing.
    last_ind = tps.searchsorted(tps[-1])
    sl = slice(last_ind, None, -1)

    # Final precision is 1 and final recall is 0
    return np.r_[precision[sl], 1], np.r_[recall[sl], 0], thresholds[sl]


def average_precision_score(matches: np.ndarray, scores: np.ndarray, p: int) -> float:
    precision, recall, _ = precision_recall_curve(matches, scores, p)
    # Compute step integral
    return -np.sum(np.diff(recall) * np.array(precision)[:-1])
def event_detection_ap(
        solution: pd.DataFrame,
        submission: pd.DataFrame,
        tolerances: Dict[str, List[float]],
) -> float:

    # Ensure solution and submission are sorted properly
    solution = solution.sort_values([series_id_column_name, time_column_name])
    submission = submission.sort_values([series_id_column_name, time_column_name])

    # Extract scoring intervals.
    if use_scoring_intervals:
        intervals = (
            solution
            .query("event in ['start', 'end']")
            .assign(interval=lambda x: x.groupby([series_id_column_name, event_column_name]).cumcount())
            .pivot(
                index='interval',
                columns=[series_id_column_name, event_column_name],
                values=time_column_name,
            )
            .stack(series_id_column_name)
            .swaplevel()
            .sort_index()
            .loc[:, ['start', 'end']]
            .apply(lambda x: pd.Interval(*x, closed='both'), axis=1)
        )

    # Extract ground-truth events.
    ground_truths = (
        solution
        .query("event not in ['start', 'end']")
        .reset_index(drop=True)
    )

    # Map each event class to its prevalence (needed for recall calculation)
    class_counts = ground_truths.value_counts(event_column_name).to_dict()

    # Create table for detections with a column indicating a match to a ground-truth event
    detections = submission.assign(matched=False)

    # Remove detections outside of scoring intervals
    if use_scoring_intervals:
        detections_filtered = []
        for (det_group, dets), (int_group, ints) in zip(
            detections.groupby(series_id_column_name), intervals.groupby(series_id_column_name)
        ):
            assert det_group == int_group
            detections_filtered.append(filter_detections(dets, ints))
        detections_filtered = pd.concat(detections_filtered, ignore_index=True)
    else:
        detections_filtered = detections

    # Create table of event-class x tolerance x series_id values
    aggregation_keys = pd.DataFrame(
        [(ev, tol, vid)
         for ev in tolerances.keys()
         for tol in tolerances[ev]
         for vid in ground_truths[series_id_column_name].unique()],
        columns=[event_column_name, 'tolerance', series_id_column_name],
    )

    # Create match evaluation groups: event-class x tolerance x series_id
    detections_grouped = (
        aggregation_keys
        .merge(detections_filtered, on=[event_column_name, series_id_column_name], how='left')
        .groupby([event_column_name, 'tolerance', series_id_column_name])
    )
    ground_truths_grouped = (
        aggregation_keys
        .merge(ground_truths, on=[event_column_name, series_id_column_name], how='left')
        .groupby([event_column_name, 'tolerance', series_id_column_name])
    )

    # Match detections to ground truth events by evaluation group
    detections_matched = []
    for key in aggregation_keys.itertuples(index=False):
        dets = detections_grouped.get_group(key)
        gts = ground_truths_grouped.get_group(key)
        detections_matched.append(
            match_detections(dets['tolerance'].iloc[0], gts, dets)
        )
    detections_matched = pd.concat(detections_matched)

    # Compute AP per event x tolerance group
    event_classes = ground_truths[event_column_name].unique()
    ap_table = (
        detections_matched
        .query("event in @event_classes")
        .groupby([event_column_name, 'tolerance']).apply(
            lambda group: average_precision_score(
                group['matched'].to_numpy(),
                group[score_column_name].to_numpy(),
                class_counts[group[event_column_name].iat[0]],
            )
        )
    )

    # Average over tolerances, then over event classes
    mean_ap = ap_table.groupby(event_column_name).mean().sum() / len(event_classes)

    return mean_ap
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import polars as pl
import datetime
from tqdm import tqdm
import plotly.express as px
from plotly.subplots import make_subplots
import plotly.graph_objects as go
tolerances = {
    'onset':  [12, 36, 60, 90, 120, 150, 180, 240, 300, 360],
    'wakeup': [12, 36, 60, 90, 120, 150, 180, 240, 300, 360]
}

column_names = {
    'series_id_column_name': 'series_id',
    'time_column_name': 'step',
    'event_column_name': 'event',
    'score_column_name': 'score',
}
#import data
dt_transforms = [
    pl.col('timestamp').str.to_datetime(),
    (pl.col('timestamp').str.to_datetime().dt.year() - 2000).cast(pl.UInt8).alias('year'),
    pl.col('timestamp').str.to_datetime().dt.month().cast(pl.UInt8).alias('month'),
    pl.col('timestamp').str.to_datetime().dt.day().cast(pl.UInt8).alias('day'),
    pl.col('timestamp').str.to_datetime().dt.hour().cast(pl.UInt8).alias('hour')
]

data_transforms = [
    pl.col('anglez').cast(pl.Int16),          # Casting anglez to 16 bit integer
    (pl.col('enmo') * 1000).cast(pl.UInt16),  # Convert enmo to 16 bit uint
]

train_series = pl.scan_parquet('train_series.parquet').with_columns(
    dt_transforms + data_transforms
)

train_events = pl.read_csv('train_events.csv').with_columns(
    dt_transforms
)

test_series = pl.scan_parquet('test_series.parquet').with_columns(
    dt_transforms + data_transforms
)

# Getting series ids as a list for convenience
series_ids = train_events['series_id'].unique(maintain_order=True).to_list()

# Removing series with mismatched counts:
onset_counts = train_events.filter(pl.col('event') == 'onset').group_by('series_id').count().sort('series_id')['count']
wakeup_counts = train_events.filter(pl.col('event') == 'wakeup').group_by('series_id').count().sort('series_id')['count']

counts = pl.DataFrame({'series_id': sorted(series_ids), 'onset_counts': onset_counts, 'wakeup_counts': wakeup_counts})
count_mismatches = counts.filter(counts['onset_counts'] != counts['wakeup_counts'])

train_series = train_series.filter(~pl.col('series_id').is_in(count_mismatches['series_id']))
train_events = train_events.filter(~pl.col('series_id').is_in(count_mismatches['series_id']))

# Updating list of series ids, not including series with no non-null values.
series_ids = train_events.drop_nulls()['series_id'].unique(maintain_order=True).to_list()

# Feature Engineering starts from here
features, feature_cols = [pl.col('hour')], ['hour']

for mins in [5, 30, 60*2, 60*8]:
    features += [
        pl.col('enmo').rolling_mean(12 * mins, center=True, min_periods=1).abs().cast(pl.UInt16).alias(f'enmo_{mins}m_mean'),
        pl.col('enmo').rolling_max(12 * mins, center=True, min_periods=1).abs().cast(pl.UInt16).alias(f'enmo_{mins}m_max')
    ]
    feature_cols += [
        f'enmo_{mins}m_mean', f'enmo_{mins}m_max'
    ]

    # Getting first variations
    for var in ['enmo', 'anglez']:
        features += [
            (pl.col(var).diff().abs().rolling_mean(12 * mins, center=True, min_periods=1) * 10).abs().cast(pl.UInt32).alias(f'{var}_1v_{mins}m_mean'),
            (pl.col(var).diff().abs().rolling_max(12 * mins, center=True, min_periods=1) * 10).abs().cast(pl.UInt32).alias(f'{var}_1v_{mins}m_max')
        ]
        feature_cols += [
            f'{var}_1v_{mins}m_mean', f'{var}_1v_{mins}m_max'
        ]

id_cols = ['series_id', 'step', 'timestamp']

train_series = train_series.with_columns(
    features
).select(id_cols + feature_cols)

test_series = test_series.with_columns(
    features
).select(id_cols + feature_cols)
# train dataset preparation method
def make_train_dataset(train_data, train_events, drop_nulls=False):

    series_ids = train_data['series_id'].unique(maintain_order=True).to_list()
    X, y = pl.DataFrame(), pl.DataFrame()
    for idx in tqdm(series_ids):

        # Normalizing sample features
        sample = train_data.filter(pl.col('series_id') == idx).with_columns(
            [(pl.col(col) / pl.col(col).std()).cast(pl.Float32) for col in feature_cols if col != 'hour']
        )

        events = train_events.filter(pl.col('series_id') == idx)

        if drop_nulls:
            # Removing datapoints on dates where no data was recorded
            sample = sample.filter(
                pl.col('timestamp').dt.date().is_in(events['timestamp'].dt.date())
            )

        X = X.vstack(sample[id_cols + feature_cols])

        onsets = events.filter((pl.col('event') == 'onset') & (pl.col('step') != None))['step'].to_list()
        wakeups = events.filter((pl.col('event') == 'wakeup') & (pl.col('step') != None))['step'].to_list()

        # NOTE: This will break if there are event series without any recorded onsets or wakeups
        y = y.vstack(sample.with_columns(
            sum([(onset <= pl.col('step')) & (pl.col('step') <= wakeup) for onset, wakeup in zip(onsets, wakeups)]).cast(pl.Boolean).alias('asleep')
        ).select('asleep')
        )

    y = y.to_numpy().ravel()

    return X, y
# apply classifier to get event method
def get_events(series, classifier):
    '''
    Takes a time series and a classifier and returns a formatted submission dataframe.
    '''
    series_ids = series['series_id'].unique(maintain_order=True).to_list()
    events = pl.DataFrame(schema={'series_id': str, 'step': int, 'event': str, 'score': float})

    for idx in tqdm(series_ids):

        # Collecting sample and normalizing features
        scale_cols = [col for col in feature_cols if (col != 'hour') & (series[col].std() != 0)]
        X = series.filter(pl.col('series_id') == idx).select(id_cols + feature_cols).with_columns(
            [(pl.col(col) / series[col].std()).cast(pl.Float32) for col in scale_cols]
        )

        # Applying classifier to get predictions and scores
        preds, probs = classifier.predict(X[feature_cols]), classifier.predict_proba(X[feature_cols])[:, 1]

        # NOTE: Considered using rolling max to get sleep periods excluding <30 min interruptions, but it ended up decreasing performance
        X = X.with_columns(
            pl.lit(preds).cast(pl.Int8).alias('prediction'),
            pl.lit(probs).alias('probability')
        )

        # Getting predicted onset and wakeup time steps
        pred_onsets = X.filter(X['prediction'].diff() > 0)['step'].to_list()
        pred_wakeups = X.filter(X['prediction'].diff() < 0)['step'].to_list()

        if len(pred_onsets) > 0:

            # Ensuring all predicted sleep periods begin and end
            if min(pred_wakeups) < min(pred_onsets):
                pred_wakeups = pred_wakeups[1:]

            if max(pred_onsets) > max(pred_wakeups):
                pred_onsets = pred_onsets[:-1]

            # Keeping sleep periods longer than 30 minutes
            sleep_periods = [(onset, wakeup) for onset, wakeup in zip(pred_onsets, pred_wakeups) if wakeup - onset >= 12 * 30]

            for onset, wakeup in sleep_periods:
                # Scoring using mean probability over period
                score = X.filter((pl.col('step') >= onset) & (pl.col('step') <= wakeup))['probability'].mean()

                # Adding sleep event to dataframe
                events = events.vstack(pl.DataFrame().with_columns(
                    pl.Series([idx, idx]).alias('series_id'),
                    pl.Series([onset, wakeup]).alias('step'),
                    pl.Series(['onset', 'wakeup']).alias('event'),
                    pl.Series([score, score]).alias('score')
                ))

    # Adding row id column
    events = events.to_pandas().reset_index().rename(columns={'index': 'row_id'})

    return events

# Extract from R the processed testing_set and testing_set_probs, then use the AP score in Python to calculate
import pandas as pd

testing_set = pd.read_csv("testing_set.csv")
testing_set_probs = pd.read_csv("testing_set_probs.csv")

series_id_column_name = 'series_id'
time_column_name = 'step'
event_column_name = 'awake'
score_column_name = 'score'

# Create the solution DataFrame
solution = testing_set[[series_id_column_name, time_column_name, event_column_name]]

# Convert predicted probabilities to class labels using a threshold of 0.5
# The probabilities for class "1" are in the second column of testing_set_probs
predicted_labels = (testing_set_probs.iloc[:, 1] > 0.5).astype(int)

# Create the submission DataFrame
submission = testing_set[[series_id_column_name, time_column_name, event_column_name]]
submission['predicted_label'] = predicted_labels    # Add predicted labels
submission['score'] = testing_set_probs.iloc[:, 1]  # Add the probabilities as confidence scores

# Handling scoring intervals if use_scoring_intervals is True
use_scoring_intervals = False  # Set to False if not using scoring intervals
if use_scoring_intervals:
    # Example: assuming 'start_event' and 'end_event' columns in testing_set
    # These columns should represent the intervals for scoring
    solution['start_event'] = testing_set['start_event']
    solution['end_event'] = testing_set['end_event']
    submission['start_event'] = testing_set['start_event']
    submission['end_event'] = testing_set['end_event']

solution = solution.rename(columns={'awake': 'event'})
submission = submission.rename(columns={'awake': 'event'})
solution['event'] = solution['event'].map({0: 'onset', 1: 'wakeup'})
submission['event'] = submission['event'].map({0: 'onset', 1: 'wakeup'})
solution.to_csv('testing_set_solution.csv', index=False)
submission.to_csv('testing_set_submission.csv', index=False)

rf_ap_score = score(solution, submission, tolerances, **column_names)
plot2 + labs(caption = "Figure5") +
  theme(plot.caption = element_text(hjust = 0.5))
plot4 + labs(caption = "Figure6") +
  theme(plot.caption = element_text(hjust = 0.5))

plot(roc_obj, main = paste("ROC Curve, AUC =", round(auc_value, 6)))
mtext("Figure7", side = 1, line = 4.15, cex = 0.8)

# Printing the Confusion Matrix
print(confusionMatrix)

# Printing the metrics
print(paste("Precision:", precision))
print(paste("Recall:", recall))
print(paste("F1 Score:", f1_score))

rf_feature_importance_plot + labs(caption = "Figure8") +
  theme(plot.caption = element_text(hjust = 0.5))

final_submission

### An Optional Dive into GGIR Package

train_events <- read.csv("train_events.csv")
train_series <- arrow::read_parquet("Zzzs_train.parquet")
test_series <- arrow::read_parquet("test_series.parquet")

write.csv(train_series, "Zzzs_train.csv", row.names = FALSE)
write.csv(test_series,'test_series.csv', row.names = FALSE)
library(GGIR)
#g.shell.GGIR
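The `#g.shell.GGIR` note above hints at the intended GGIR entry point. The sketch below is purely illustrative: the directory paths and option values are placeholders rather than settings used in this study, and CSV exports in this layout would still need a GGIR-compatible reading configuration before the pipeline could actually run.

# Hypothetical sketch only: paths and options are placeholders, not part of this analysis.
# g.shell.GGIR(
#   mode = 1:2,                      # which parts of the GGIR pipeline to run
#   datadir = "path/to/acc_csvs",    # folder holding the exported accelerometer files
#   outputdir = "path/to/ggir_out",  # where GGIR writes its results
#   do.report = 2                    # produce the part-2 summary report
# )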
#- Load libraries
library(tidyverse)
library(arrow)
library(skimr)
library(dplyr)
library(ggplot2)
library(lubridate)
library(caret)
library(randomForest)
library(patchwork)
library(pROC)
library(purrr)

#- Read train_events and modify timestamp with lubridate
#- Read events
events <- read_csv("train_events.csv") %>%
  mutate(dt = as_datetime(timestamp)) %>%
  mutate(dt = dt - hours(4)) %>% mutate(hr = hour(dt)) %>%
  select(-timestamp)

head(events)

#- Events counts
events %>% count(event)

#- Sleep onset and wakeup timeline
# Merge onset and wakeup data on dates
timeline_data <- events %>%
  filter(event %in% c("onset", "wakeup")) %>%
  group_by(date = as.Date(dt)) %>%
  mutate(time_minutes = hour(dt) * 60 + minute(dt))
# Create a scatter plot
plot1 <- ggplot(timeline_data, aes(x = date, y = time_minutes, color = event)) +
  geom_point(shape = 3, alpha = 0.2) +
  labs(title = "Sleep Onset and Wakeup Timeline",
       x = "Date",
       y = "Time of Day (minutes after midnight)") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1))
# Calculate sleep durations and onset hour
pivot_data <- events %>%
  group_by(series_id, night) %>%
  summarize(duration_minutes = (max(step) - min(step)) / 60,
            onset_hour = hour(min(dt)), .groups = 'drop')

# Calculate average sleep duration by hour of onset
average_duration_by_hour <- pivot_data %>%
  group_by(onset_hour) %>%
  summarize(avg_duration = mean(duration_minutes), .groups = 'drop')
# Create a bar plot
plot2 <- ggplot(average_duration_by_hour, aes(x = onset_hour, y = avg_duration, fill = avg_duration)) +
  geom_bar(stat = "identity") +
  labs(title = "Average Sleep Duration by Hour of Onset",
       x = "Hour of Sleep Onset",
       y = "Average Sleep Duration (minutes)") +
  theme_minimal() +
  scale_fill_gradient(low = "red", high = "green")

# Order days of the week
ordered_days <- c("Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday")

# Calculate sleep duration in minutes and extract the day of the week
pivot_data <- events %>%
  group_by(series_id, night) %>%
  summarize(duration_minutes = (max(step) - min(step)) / 60,
            min_datetime = min(dt), .groups = 'drop') %>%
  mutate(day_of_week = factor(format(min_datetime, "%A"), levels = ordered_days))
# Create a box plot
plot3 <- ggplot(pivot_data, aes(x = day_of_week, y = duration_minutes, fill = day_of_week)) +
  geom_boxplot() +
  labs(title = "Box Plot of Sleep Duration by Day of Week",
       x = "Day of the Week",
       y = "Sleep Duration (minutes)") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1))

#- Distribution of wakeup and distribution of onset
# Filter the data for "wakeup" and "onset" events
events_wakeup <- events %>% filter(event == "wakeup")
events_onset <- events %>% filter(event == "onset")

# Combine the filtered data into a single data frame
combined_events <- rbind(events_wakeup, events_onset)

# Create the plot with density lines for "wakeup" and "onset" events
plot4 <- ggplot(combined_events, aes(x = hr, color = event, fill = event)) +
  geom_density(alpha = 0.5) +
  labs(x = "Hour", y = "Density") +
  scale_color_manual(values = c("wakeup" = "red", "onset" = "blue")) +
  scale_fill_manual(values = c("wakeup" = "red", "onset" = "blue")) +
  theme_minimal()
# Calculate the number of events per series
events_per_series <- events %>%
  group_by(series_id) %>%
  summarize(num_events = n())

# Create a histogram for the distribution of events per series
plot5 <- ggplot(events_per_series, aes(x = num_events)) +
  geom_histogram(bins = 30, fill = "orange", color = "black", alpha = 0.7) +
  labs(title = "Distribution of Number of Events per Series",
       x = "Number of Events",
       y = "Number of Series") +
  theme_minimal()

# Most series exhibit approximately 48 events. However, a few series have a significantly lower number of valid events and may be excluded from the study.
#- Detect NA in events
events_na <- events %>% group_by(series_id, step) %>% filter(is.na(step))
events_na

num_na <- length(unique(events_na$series_id))
na_id <- unique(events_na$series_id)

num_na
na_id

# Nights whose steps contain 'NA' (not available) records were judged not precise enough and were removed; the analysis keeps only accelerometer series with complete event records to protect the integrity and reliability of the findings.
#- Series_id without NA events
all_id <- unique(events$series_id)
nna_id <- setdiff(all_id, na_id)
nna_id

#- Remove two truncated event series
trunc <- c("31011ade7c0a", "a596ad0b82aa")
nna_id <- setdiff(nna_id, trunc)
df_nna <- tibble(nna_id) %>% rename(series_id = nna_id)
df_nna <- plot1 + labs(caption = "Figure1")+
plot1 = element_text(hjust = 0.5))
theme(plot.caption <- plot3 + labs(caption = "Figure 2")+
plot3 = element_text(hjust = 0.5))
theme(plot.caption
<- plot1 + plot3
motivate_plot = 2)
plot_layout(nrow
motivate_plot#- Read training data
<- read_parquet('Zzzs_train.parquet')
train
head(train)<- right_join(train,df_nna)
nna_train <- right_join(events,df_nna)
nna_event <- left_join(nna_train,nna_event)
train_new <- right_join(nna_train,nna_event)
train_new_full
head(train_new_full)<- train_new_full %>%
training_data -timestamp,-event,-dt,-night)
select(
head(training_data)

#- Read testing data
test_old <- read_parquet("test_series.parquet")

# Generate some random data to enlarge test_series; the original file has only 3 unique series_id, and each series lasts 2.5 hours.
set.seed(123)
num_rows <- 150

random_data <- data.frame(
  series_id = rep("038441c925bb", num_rows),
  step = 0:(num_rows - 1),
  timestamp = seq(from = ymd_hms("2023-12-13T22:30:00-0400"),
                  by = "5 sec", length.out = num_rows),
  anglez = runif(num_rows, min = 0, max = 5),   # Random values between 0 and 5
  enmo = runif(num_rows, min = 0, max = 0.05)   # Random values between 0 and 0.05
)
# Convert timestamps to character
random_data$timestamp <- format(random_data$timestamp, format = "%Y-%m-%dT%H:%M:%S-0400")

test <- rbind(test_old, random_data)

random_data <- data.frame(
  series_id = rep("038491c925aa", num_rows),
  step = 0:(num_rows - 1),
  timestamp = seq(from = ymd_hms("2020-10-13T23:45:00-0300"),
                  by = "5 sec", length.out = num_rows),
  anglez = runif(num_rows, min = 0, max = 5),   # Random values between 0 and 5
  enmo = runif(num_rows, min = 0, max = 0.05)   # Random values between 0 and 0.05
)
# Convert timestamps to character
random_data$timestamp <- format(random_data$timestamp, format = "%Y-%m-%dT%H:%M:%S-0500")

test <- rbind(test, random_data)

random_data <- data.frame(
  series_id = rep("038491c925aa", num_rows),
  step = 0:(num_rows - 1),
  timestamp = seq(from = ymd_hms("2021-02-13T02:36:00-0400"),
                  by = "5 sec", length.out = num_rows),
  anglez = runif(num_rows, min = 0, max = 5),   # Random values between 0 and 5
  enmo = runif(num_rows, min = 0, max = 0.05)   # Random values between 0 and 0.05
)
# Convert timestamps to character
random_data$timestamp <- format(random_data$timestamp, format = "%Y-%m-%dT%H:%M:%S-0500")

test <- rbind(test, random_data)

test <- test %>%
  mutate(dt = as_datetime(timestamp)) %>%
  mutate(dt = dt - hours(4)) %>%
  mutate(hr = hour(dt)) %>%
  mutate(step = hr * 60 + step) %>%
  select(-timestamp, -dt)
head(test)

set.seed(123)
split <- createDataPartition(training_data$awake, p = 0.8, list = FALSE)
training_set <- training_data[split, ]
training_set$awake <- as.factor(training_set$awake)
testing_set <- training_data[-split, ]
testing_set$awake <- as.factor(testing_set$awake)

preprocessing_method <- "standardize"  # or "normalize"
# Note: if/else (rather than ifelse) keeps the full c("center", "scale") method vector
preprocess_params <- preProcess(training_set,
                                method = if (preprocessing_method == "standardize") c("center", "scale") else c("range"))
saveRDS(preprocess_params, file = "preprocess_params.rds")
preprocessed_training_set <- predict(preprocess_params, training_set)
# Initialize a data frame to store results
results_df <- data.frame(ntree = integer(), mtry = integer(), OOBError = numeric())

# Define the range for mtry and ntree
mtry_range <- seq(1, ncol(preprocessed_training_set) - 1, by = 1)  # Full range of predictors
ntree_range <- seq(100, 550, by = 5)                               # Range for ntree

# Loop over mtry and ntree values
for (mtry in mtry_range) {
  for (ntree in ntree_range) {
    set.seed(123)
    model <- randomForest(awake ~ ., data = preprocessed_training_set, mtry = mtry, ntree = ntree, do.trace = FALSE)

    # Extract OOB error rate
    OOBError <- model$err.rate[nrow(model$err.rate), "OOB"]

    # Store results
    results_df <- rbind(results_df, data.frame(ntree = ntree, mtry = mtry, OOBError = OOBError))
}
}
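Re-fitting a forest for every ntree value is expensive. Since randomForest already records the cumulative OOB error after each tree in model$err.rate, a single large fit per mtry could be inspected instead; the sketch below illustrates that shortcut (it is an alternative, not the procedure used above).

# Sketch: one fit per mtry, then read the OOB error trajectory over trees
set.seed(123)
single_fit <- randomForest(awake ~ ., data = preprocessed_training_set, mtry = 3, ntree = 550)
oob_by_ntree <- data.frame(ntree = 1:nrow(single_fit$err.rate),
                           OOBError = single_fit$err.rate[, "OOB"])
head(oob_by_ntree)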
# Check if results_df is empty or has NA values
if (nrow(results_df) == 0 || any(is.na(results_df$OOBError))) {
"No data to plot. Check the random forest model training.")
stop(
}
# Plot accuracy vs. mtry (plot accuracy for different mtry at a fixed ntree)
# Calculate accuracy from the OOB error rate
<- 1 - results_df$OOBError
results_df$Accuracy <- ggplot(subset(results_df, ntree == 550), aes(x = mtry, y = Accuracy)) +
accuracy_plot +
geom_line() +
geom_point() = "Attempt1: max experimental ntree",
labs(title = "#Randomly Selected Predictors",
x = "Accuracy (Cross-validation)")
y print(accuracy_plot)
# Filter the results for mtry = 2 and mtry = 3
filtered_df <- results_df[results_df$mtry %in% c(2, 3), ]

# Find local minima for each mtry group
local_minima <- filtered_df %>%
  group_by(mtry) %>%
  slice(which(diff(sign(diff(OOBError))) == 2) + 1) %>%
  ungroup()

specific_minima <- local_minima[5, ]

label_point <- data.frame(
  ntree = 335,
  OOBError = 0.01732673,
  label = "ntree: 335\nError: 0.0173")
# Plot OOB error rates for mtry = 2 and mtry = 3
oob_error_plot <- ggplot(filtered_df, aes(x = ntree, y = OOBError, color = as.factor(mtry), group = mtry)) +
  geom_smooth() +
  geom_point(data = specific_minima, aes(x = ntree, y = OOBError), color = "blue", size = 5, shape = 21, fill = "blue") +
  geom_text(data = label_point, aes(x = ntree, y = OOBError, label = label), nudge_y = 0.001, hjust = 0, vjust = 0, color = "black") +
  xlab("Number of Trees") +
  ylab("Out-of-Bag Error Rate") +
  ggtitle("Error Rate Over mtry = 2 and 3") +
  scale_color_manual(values = c("red", "yellow"), labels = c("mtry = 3", "mtry = 2")) +
  theme_minimal()
print(oob_error_plot)

results_df$Accuracy <- 1 - results_df$OOBError
accuracy_plot2 <- ggplot(subset(results_df, ntree == 335), aes(x = mtry, y = Accuracy)) +
  geom_line() +
  geom_point() +
  labs(title = "Attempt2: ntree=335",
       x = "#Randomly Selected Predictors",
       y = "Accuracy (Cross-validation)")
print(accuracy_plot2)
accuracy_plot <- accuracy_plot + labs(caption = "Figure3.1") + theme(plot.caption = element_text(hjust = 0.5))
oob_error_plot <- oob_error_plot + labs(caption = "Figure4") + theme(plot.caption = element_text(hjust = 0.5))
accuracy_plot2 <- accuracy_plot2 + labs(caption = "Figure3.2") + theme(plot.caption = element_text(hjust = 0.5))

hyperparam_plot <- accuracy_plot + oob_error_plot + accuracy_plot2 +
  plot_layout(nrow = 2)
hyperparam_plot

preprocessed_training_set$awake <- as.factor(preprocessed_training_set$awake)
model <- randomForest(awake ~ ., data = preprocessed_training_set, ntree = 335, mtry = 3)
saveRDS(model, file = "ReducRFmodel.rds")

# Calculate training accuracy
confusion_matrix <- model$confusion
training_accuracy <- sum(diag(confusion_matrix)) / sum(confusion_matrix)
print(confusion_matrix)
print(paste("Training Accuracy:", training_accuracy))

oob_error <- model$err.rate[nrow(model$err.rate), "OOB"]
training_oob_accuracy <- 1 - oob_error
print(paste("OOB Error Rate:", oob_error))
print(paste("Training OOB Accuracy:", training_oob_accuracy))
preprocess_params <- readRDS(file = "preprocess_params.rds")
# Apply the same preprocessing to the test data
preprocessed_testing_set <- predict(preprocess_params, testing_set)

# Ensure that the 'predict' function returns probabilities
testing_set_probs <- predict(model, preprocessed_testing_set, type = "prob")
# Extract probabilities for the positive class ('1' is the positive class)
positive_class_probs <- testing_set_probs[, "1"]

# Calculate the ROC object
roc_obj <- roc(preprocessed_testing_set$awake, positive_class_probs)
# Calculate the AUC
auc_value <- auc(roc_obj)
print(auc_value)

# Plot the ROC curve along with the AUC
plot(roc_obj, main = paste("ROC Curve, AUC =", round(auc_value, 6)))

write.csv(testing_set, "testing_set.csv", row.names = FALSE)
write.csv(testing_set_probs, "testing_set_probs.csv", row.names = FALSE)
# Extracting predicted class labels with a threshold. I use 0.5 here
predicted_labels <- ifelse(testing_set_probs[, "1"] > 0.5, 1, 0)

# Confusion Matrix
confusionMatrix <- confusionMatrix(factor(predicted_labels), factor(preprocessed_testing_set$awake))

# Printing the Confusion Matrix
print(confusionMatrix)

# Precision, Recall, and F1 Score
precision <- posPredValue(factor(predicted_labels), factor(preprocessed_testing_set$awake))
recall <- sensitivity(factor(predicted_labels), factor(preprocessed_testing_set$awake))
f1_score <- 2 * ((precision * recall) / (precision + recall))
# Printing the metrics
print(paste("Precision:", precision))
print(paste("Recall:", recall))
print(paste("F1 Score:", f1_score))
# Extract feature importance
importance <- importance(model)
colnames(importance)

feature_importance <- data.frame(
  Feature = rownames(importance),
  Importance = importance[, "MeanDecreaseGini"]
)

# Plot using ggplot2
rf_feature_importance_plot <- ggplot(feature_importance, aes(x = reorder(Feature, Importance), y = Importance)) +
  geom_bar(stat = "identity") +
  coord_flip() +  # Flip the axes for horizontal bars
  xlab("Feature") +
  ylab("MeanDecreaseGini") +
  ggtitle("Feature Importance from Random Forest Model") +
  theme_minimal()

preprocess_params <- readRDS(file = "preprocess_params.rds")
# Apply the same preprocessing to the test data
preprocessed_test <- predict(preprocess_params, test)

# Predict probabilities
# This returns a matrix with probabilities for each class
prob_predictions <- predict(model, newdata = preprocessed_test, type = "prob")

# Determine the predicted class based on the higher probability
# and extract the corresponding confidence score
test$predicted_event <- apply(prob_predictions, 1, function(x) names(x)[which.max(x)])
test$confidence_score <- apply(prob_predictions, 1, max)

# Prepare the submission data frame and write it out
submission <- test %>%
  select(series_id, step, predicted_event, confidence_score)
write.csv(submission, "submission.csv", row.names = FALSE)
# Predict probabilities
prob_predictions <- predict(model, newdata = preprocessed_test, type = "prob")
# Add predicted probabilities to the test data
test$onset_confidence <- prob_predictions[, "1"]
test$wakeup_confidence <- prob_predictions[, "0"]

# Determine the most likely onset and wakeup for each series_id
# For each series_id, find the step with the highest confidence for onset and wakeup
final_selection <- test %>%
  group_by(series_id) %>%
  summarize(
    onset_step = step[which.max(onset_confidence)],
    onset_score = max(onset_confidence),
    wakeup_step = step[which.max(wakeup_confidence)],
    wakeup_score = max(wakeup_confidence)
  ) %>%
  ungroup()

# Reshape the data for submission
final_submission <- final_selection %>%
  select(series_id, onset_step, onset_score, wakeup_step, wakeup_score) %>%
  pivot_longer(
    cols = c(onset_step, wakeup_step),
    names_to = "event_type",
    values_to = "step"
  ) %>%
  mutate(
    event = ifelse(event_type == "onset_step", "onset", "wakeup"),
    score = ifelse(event_type == "onset_step", onset_score, wakeup_score)
  ) %>%
  select(-event_type, -onset_score, -wakeup_score)

# Assign row_id
final_submission <- final_submission %>%
  mutate(row_id = row_number() - 1) %>%
  select(row_id, everything())

# Extract the mean values used for centering
step_mean <- preprocess_params$mean["step"]

# Write the submission file
write.csv(final_submission, "final_submission.csv", row.names = FALSE)

# # Filter function
# # Define feature columns used in the model
# feature_cols <- c("series_id", "step", "anglez", "enmo", "hr") # Include 'step' and other features
# # Loop over each series ID
# unique_series_ids<-unique(preprocessed_test$series_id)
# for (series_id in unique_series_ids) {
# series_data <- preprocessed_test %>% filter(series_id == series_id)
#
# # Predict events
# preds <- predict(model, newdata = series_data[feature_cols])
#
# # Detect sleep onsets and wakeups
# pred_changes <- c(FALSE, diff(preds) != 0)
# pred_onsets <- series_data$step[preds == 1 & pred_changes]
# pred_wakeups <- series_data$step[preds == 0 & pred_changes]
#
# # Filter and score events
# valid_periods <- which(pred_wakeups - pred_onsets >= 12 * 30) # Adjust threshold as needed
# if (length(valid_periods) > 0) {
# for (i in valid_periods) {
# onset_step <- pred_onsets[i]
# wakeup_step <- pred_wakeups[i]
# score <- mean(series_data$onset_confidence[onset_step:wakeup_step], na.rm = TRUE)
#
# # Add to final submission
# final_submission <- rbind(final_submission, data.frame(
# series_id = series_id,
# onset_step = onset_step,
# wakeup_step = wakeup_step,
# score = score
# ))
# }
# }
# }
final_submission
library(reticulate)"env", required = TRUE)
use_condaenv("""Event Detection Average Precision
An average precision metric for event detection in time series and
video.
"""
import numpy as np
import pandas as pd
import pandas.api.types
from typing import Dict, List, Tuple
class ParticipantVisibleError(Exception):
    pass


# Set some placeholders for global parameters
series_id_column_name = None
time_column_name = None
event_column_name = None
score_column_name = None
use_scoring_intervals = None
def score(
        solution: pd.DataFrame,
        submission: pd.DataFrame,
        tolerances: Dict[str, List[float]],
        series_id_column_name: str,
        time_column_name: str,
        event_column_name: str,
        score_column_name: str,
        use_scoring_intervals: bool = False,
) -> float:
    """Event Detection Average Precision, an AUCPR metric for event detection in
time series and video.
This metric is similar to IOU-threshold average precision metrics commonly
used in object detection. For events occurring in time series, we replace the
IOU threshold with a time tolerance.
Submissions are evaluated on the average precision of detected events,
averaged over timestamp error tolerance thresholds, averaged over event
classes.
Detections are matched to ground-truth events within error tolerances, with
ambiguities resolved in order of decreasing confidence.
Detailed Description
--------------------
Evaluation proceeds in four steps:
1. Selection - (optional) Predictions not within a series' scoring
intervals are dropped.
2. Assignment - Predicted events are matched with ground-truth events.
3. Scoring - Each group of predictions is scored against its corresponding
group of ground-truth events via Average Precision.
4. Reduction - The multiple AP scores are averaged to produce a single
overall score.
Selection
With each series there may be a defined set of scoring intervals giving the
intervals of time over which zero or more ground-truth events might be
annotated in that series. A prediction will be evaluated only if it falls
within a scoring interval. These scoring intervals can be chosen to improve
the fairness of evaluation by, for instance, ignoring edge-cases or
ambiguous events.
It is recommended that, if used, scoring intervals be provided for training
data but not test data.
Assignment
For each set of predictions and ground-truths within the same `event x
tolerance x series_id` group, we match each ground-truth to the
highest-confidence unmatched prediction occurring within the allowed
tolerance.
Some ground-truths may not be matched to a prediction and some predictions
may not be matched to a ground-truth. They will still be accounted for in
the scoring, however.
Scoring
Collecting the events within each `series_id`, we compute an Average
Precision score for each `event x tolerance` group. The average precision
score is the area under the (step-wise) precision-recall curve generated by
decreasing confidence score thresholds over the predictions. In this
calculation, matched predictions over the threshold are scored as TP and
unmatched predictions as FP. Unmatched ground-truths are scored as FN.
Reduction
The final score is the average of the above AP scores, first averaged over
tolerance, then over event.
Parameters
----------
solution : pd.DataFrame, with columns:
`series_id_column_name` identifier for each time series
`time_column_name` the time of occurrence for each event as a numeric type
`event_column_name` class label for each event
The solution contains the time of occurrence of one or more types of
event within one or more time series. The metric expects the solution to
contain the same event types as those given in `tolerances`.
When `use_scoring_intervals == True`, you may include `start` and `end`
events to delimit intervals within which detections will be scored.
Detected events (from the user submission) outside of these events will
be ignored.
submission : pd.DataFrame, with columns as above and in addition:
`score_column_name` the predicted confidence score for the detected event
tolerances : Dict[str, List[float]]
Maps each event class to a list of timestamp tolerances used
for matching detections to ground-truth events.
use_scoring_intervals: bool, default False
Whether to ignore predicted events outside intervals delimited
by `'start'` and `'end'` events in the solution. When `False`,
the solution should not include `'start'` and `'end'` events.
See the examples for illustration.
Returns
-------
event_detection_ap : float
The mean average precision of the detected events.
Examples
--------
Detecting `'pass'` events in football:
>>> column_names = {
... 'series_id_column_name': 'video_id',
... 'time_column_name': 'time',
... 'event_column_name': 'event',
... 'score_column_name': 'score',
... }
>>> tolerances = {'pass': [1.0]}
>>> solution = pd.DataFrame({
... 'video_id': ['a', 'a'],
... 'event': ['pass', 'pass'],
... 'time': [0, 15],
... })
>>> submission = pd.DataFrame({
... 'video_id': ['a', 'a', 'a'],
... 'event': ['pass', 'pass', 'pass'],
... 'score': [1.0, 0.5, 1.0],
... 'time': [0, 10, 14.5],
... })
>>> score(solution, submission, tolerances, **column_names)
1.0
Increasing the confidence score of the false detection above the true
detections decreases the AP.
>>> submission.loc[1, 'score'] = 1.5
>>> score(solution, submission, tolerances, **column_names)
0.6666666666666666...
Likewise, decreasing the confidence score of a true detection below the
false detection also decreases the AP.
>>> submission.loc[1, 'score'] = 0.5 # reset
>>> submission.loc[0, 'score'] = 0.0
>>> score(solution, submission, tolerances, **column_names)
0.8333333333333333...
We average AP scores over tolerances. Previously, the detection at 14.5
would match, but adding smaller tolerances gives AP scores where it does
not match. This results in both a FN, since the ground-truth wasn't
detected, and a FP, since the detected event matches no ground-truth.
>>> tolerances = {'pass': [0.1, 0.2, 1.0]}
>>> score(solution, submission, tolerances, **column_names)
0.3888888888888888...
We also average over time series and over event classes.
>>> tolerances = {'pass': [0.5, 1.0], 'challenge': [0.25, 0.50]}
>>> solution = pd.DataFrame({
... 'video_id': ['a', 'a', 'b'],
... 'event': ['pass', 'challenge', 'pass'],
... 'time': [0, 15, 0], # restart time for new time series b
... })
>>> submission = pd.DataFrame({
... 'video_id': ['a', 'a', 'b'],
... 'event': ['pass', 'challenge', 'pass'],
... 'score': [1.0, 0.5, 1.0],
... 'time': [0, 15, 0],
... })
>>> score(solution, submission, tolerances, **column_names)
1.0
By adding scoring intervals to the solution, we may choose to ignore
detections outside of those intervals.
>>> tolerances = {'pass': [1.0]}
>>> solution = pd.DataFrame({
... 'video_id': ['a', 'a', 'a', 'a'],
... 'event': ['start', 'pass', 'pass', 'end'],
... 'time': [0, 10, 20, 30],
... })
>>> submission = pd.DataFrame({
... 'video_id': ['a', 'a', 'a'],
... 'event': ['pass', 'pass', 'pass'],
... 'score': [1.0, 1.0, 1.0],
... 'time': [10, 20, 40],
... })
>>> score(solution, submission, tolerances, **column_names, use_scoring_intervals=True)
1.0
"""
# Validate metric parameters
assert len(tolerances) > 0, "Events must have defined tolerances."
    assert set(tolerances.keys()) == set(solution[event_column_name]).difference({'start', 'end'}), \
        (f"Solution column {event_column_name} must contain the same events "
         "as defined in tolerances.")
    assert pd.api.types.is_numeric_dtype(solution[time_column_name]), \
        f"Solution column {time_column_name} must be of numeric type."

    # Validate submission format
    for column_name in [
        series_id_column_name,
        time_column_name,
        event_column_name,
        score_column_name,
    ]:
        if column_name not in submission.columns:
            raise ParticipantVisibleError(f"Submission must have column '{column_name}'.")

    if not pd.api.types.is_numeric_dtype(submission[time_column_name]):
        raise ParticipantVisibleError(
            f"Submission column '{time_column_name}' must be of numeric type."
        )
    if not pd.api.types.is_numeric_dtype(submission[score_column_name]):
        raise ParticipantVisibleError(
            f"Submission column '{score_column_name}' must be of numeric type."
        )
# Set these globally to avoid passing around a bunch of arguments
globals()['series_id_column_name'] = series_id_column_name
globals()['time_column_name'] = time_column_name
globals()['event_column_name'] = event_column_name
globals()['score_column_name'] = score_column_name
globals()['use_scoring_intervals'] = use_scoring_intervals
return event_detection_ap(solution, submission, tolerances)
def filter_detections(
        detections: pd.DataFrame, intervals: pd.DataFrame
) -> pd.DataFrame:
    """Drop detections not inside a scoring interval."""
    detection_time = detections.loc[:, time_column_name].sort_values().to_numpy()
    intervals = intervals.to_numpy()
    is_scored = np.full_like(detection_time, False, dtype=bool)

    i, j = 0, 0
    while i < len(detection_time) and j < len(intervals):
        time = detection_time[i]
        int_ = intervals[j]

        # If the detection is prior in time to the interval, go to the next detection.
        if time < int_.left:
            i += 1
        # If the detection is inside the interval, keep it and go to the next detection.
        elif time in int_:
            is_scored[i] = True
            i += 1
        # If the detection is later in time, go to the next interval.
        else:
            j += 1

    return detections.loc[is_scored].reset_index(drop=True)
def match_detections(
        tolerance: float, ground_truths: pd.DataFrame, detections: pd.DataFrame
) -> pd.DataFrame:
    """Match detections to ground truth events. Arguments are taken from a common event x tolerance x series_id evaluation group."""
    detections_sorted = detections.sort_values(score_column_name, ascending=False).dropna()
    is_matched = np.full_like(detections_sorted[event_column_name], False, dtype=bool)
    gts_matched = set()
    for i, det in enumerate(detections_sorted.itertuples(index=False)):
        best_error = tolerance
        best_gt = None

        for gt in ground_truths.itertuples(index=False):
            error = abs(getattr(det, time_column_name) - getattr(gt, time_column_name))
            if error < best_error and gt not in gts_matched:
                best_gt = gt
                best_error = error

        if best_gt is not None:
            is_matched[i] = True
            gts_matched.add(best_gt)

    detections_sorted['matched'] = is_matched

    return detections_sorted
def precision_recall_curve(
        matches: np.ndarray, scores: np.ndarray, p: int
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
    if len(matches) == 0:
        return [1], [0], []

    # Sort matches by decreasing confidence
    idxs = np.argsort(scores, kind='stable')[::-1]
    scores = scores[idxs]
    matches = matches[idxs]

    distinct_value_indices = np.where(np.diff(scores))[0]
    threshold_idxs = np.r_[distinct_value_indices, matches.size - 1]
    thresholds = scores[threshold_idxs]

    # Matches become TPs and non-matches FPs as confidence threshold decreases
    tps = np.cumsum(matches)[threshold_idxs]
    fps = np.cumsum(~matches)[threshold_idxs]

    precision = tps / (tps + fps)
    precision[np.isnan(precision)] = 0
    recall = tps / p  # total number of ground truths might be different than total number of matches

    # Stop when full recall attained and reverse the outputs so recall is non-increasing.
    last_ind = tps.searchsorted(tps[-1])
    sl = slice(last_ind, None, -1)

    # Final precision is 1 and final recall is 0
    return np.r_[precision[sl], 1], np.r_[recall[sl], 0], thresholds[sl]
def average_precision_score(matches: np.ndarray, scores: np.ndarray, p: int) -> float:
    precision, recall, _ = precision_recall_curve(matches, scores, p)
    # Compute step integral
    return -np.sum(np.diff(recall) * np.array(precision)[:-1])
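To make the step integral concrete, here is a small worked example (illustrative only, not part of the metric module): with three detections of which the two most confident are matched and three ground-truth events, the precision values at the thresholds are 1, 1, and 2/3, recall reaches 2/3, and the resulting AP is 2/3.

# Toy check of the AP computation: two of three detections are true positives,
# three ground-truth events exist in total, so AP = 2/3.
matches = np.array([True, True, False])
scores = np.array([0.9, 0.8, 0.7])
print(average_precision_score(matches, scores, p=3))  # ~0.667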
def event_detection_ap(
        solution: pd.DataFrame,
        submission: pd.DataFrame,
        tolerances: Dict[str, List[float]],
) -> float:

    # Ensure solution and submission are sorted properly
    solution = solution.sort_values([series_id_column_name, time_column_name])
    submission = submission.sort_values([series_id_column_name, time_column_name])

    # Extract scoring intervals.
    if use_scoring_intervals:
        intervals = (
            solution
            .query("event in ['start', 'end']")
            .assign(interval=lambda x: x.groupby([series_id_column_name, event_column_name]).cumcount())
            .pivot(
                index='interval',
                columns=[series_id_column_name, event_column_name],
                values=time_column_name,
            )
            .stack(series_id_column_name)
            .swaplevel()
            .sort_index()
            .loc[:, ['start', 'end']]
            .apply(lambda x: pd.Interval(*x, closed='both'), axis=1)
        )

    # Extract ground-truth events.
    ground_truths = (
        solution
        .query("event not in ['start', 'end']")
        .reset_index(drop=True)
    )

    # Map each event class to its prevalence (needed for recall calculation)
    class_counts = ground_truths.value_counts(event_column_name).to_dict()

    # Create table for detections with a column indicating a match to a ground-truth event
    detections = submission.assign(matched=False)

    # Remove detections outside of scoring intervals
    if use_scoring_intervals:
        detections_filtered = []
        for (det_group, dets), (int_group, ints) in zip(
            detections.groupby(series_id_column_name), intervals.groupby(series_id_column_name)
        ):
            assert det_group == int_group
            detections_filtered.append(filter_detections(dets, ints))
        detections_filtered = pd.concat(detections_filtered, ignore_index=True)
    else:
        detections_filtered = detections

    # Create table of event-class x tolerance x series_id values
    aggregation_keys = pd.DataFrame(
        [(ev, tol, vid)
         for ev in tolerances.keys()
         for tol in tolerances[ev]
         for vid in ground_truths[series_id_column_name].unique()],
        columns=[event_column_name, 'tolerance', series_id_column_name],
    )

    # Create match evaluation groups: event-class x tolerance x series_id
    detections_grouped = (
        aggregation_keys
        .merge(detections_filtered, on=[event_column_name, series_id_column_name], how='left')
        .groupby([event_column_name, 'tolerance', series_id_column_name])
    )
    ground_truths_grouped = (
        aggregation_keys
        .merge(ground_truths, on=[event_column_name, series_id_column_name], how='left')
        .groupby([event_column_name, 'tolerance', series_id_column_name])
    )

    # Match detections to ground truth events by evaluation group
    detections_matched = []
    for key in aggregation_keys.itertuples(index=False):
        dets = detections_grouped.get_group(key)
        gts = ground_truths_grouped.get_group(key)
        detections_matched.append(
            match_detections(dets['tolerance'].iloc[0], gts, dets)
        )
    detections_matched = pd.concat(detections_matched)

    # Compute AP per event x tolerance group
    event_classes = ground_truths[event_column_name].unique()
    ap_table = (
        detections_matched
        .query("event in @event_classes")
        .groupby([event_column_name, 'tolerance']).apply(
            lambda group: average_precision_score(
                group['matched'].to_numpy(),
                group[score_column_name].to_numpy(),
                class_counts[group[event_column_name].iat[0]],
            )
        )
    )

    # Average over tolerances, then over event classes
    mean_ap = ap_table.groupby(event_column_name).mean().sum() / len(event_classes)

    return mean_ap
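A quick sanity check of the metric with this project's column conventions (toy numbers, onset/wakeup classes only, and a single 12-step tolerance, so not the configuration used in the evaluation) might look like:

# Illustrative toy example: one series, one onset and one wakeup, both detected
# within the 12-step tolerance, so the returned score is 1.0.
toy_tolerances = {'onset': [12], 'wakeup': [12]}
toy_columns = {'series_id_column_name': 'series_id', 'time_column_name': 'step',
               'event_column_name': 'event', 'score_column_name': 'score'}
toy_solution = pd.DataFrame({'series_id': ['s1', 's1'],
                             'event': ['onset', 'wakeup'],
                             'step': [100, 460]})
toy_submission = pd.DataFrame({'series_id': ['s1', 's1'],
                               'event': ['onset', 'wakeup'],
                               'step': [105, 470],
                               'score': [0.9, 0.8]})
print(score(toy_solution, toy_submission, toy_tolerances, **toy_columns))  # 1.0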
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import polars as pl
import datetime
from tqdm import tqdm
import plotly.express as px
from plotly.subplots import make_subplots
import plotly.graph_objects as go
tolerances = {
    'onset':  [12, 36, 60, 90, 120, 150, 180, 240, 300, 360],
    'wakeup': [12, 36, 60, 90, 120, 150, 180, 240, 300, 360]
}

column_names = {
    'series_id_column_name': 'series_id',
    'time_column_name': 'step',
    'event_column_name': 'event',
    'score_column_name': 'score',
}
#import data
dt_transforms = [
    pl.col('timestamp').str.to_datetime(),
    (pl.col('timestamp').str.to_datetime().dt.year() - 2000).cast(pl.UInt8).alias('year'),
    pl.col('timestamp').str.to_datetime().dt.month().cast(pl.UInt8).alias('month'),
    pl.col('timestamp').str.to_datetime().dt.day().cast(pl.UInt8).alias('day'),
    pl.col('timestamp').str.to_datetime().dt.hour().cast(pl.UInt8).alias('hour')
]

data_transforms = [
    pl.col('anglez').cast(pl.Int16),          # Casting anglez to 16-bit integer
    (pl.col('enmo') * 1000).cast(pl.UInt16),  # Convert enmo to 16-bit uint
]

train_series = pl.scan_parquet('train_series.parquet').with_columns(
    dt_transforms + data_transforms
)

train_events = pl.read_csv('train_events.csv').with_columns(
    dt_transforms
)

test_series = pl.scan_parquet('test_series.parquet').with_columns(
    dt_transforms + data_transforms
)
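Note that pl.scan_parquet builds a lazy query, so nothing is read from disk until the query is collected; a quick way to peek at the transformed training series (illustrative only) is:

# Sketch: materialize only the first few rows of the lazy training series
print(train_series.head(5).collect())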
# Getting series ids as a list for convenience
series_ids = train_events['series_id'].unique(maintain_order=True).to_list()

# Removing series with mismatched counts:
onset_counts = train_events.filter(pl.col('event') == 'onset').group_by('series_id').count().sort('series_id')['count']
wakeup_counts = train_events.filter(pl.col('event') == 'wakeup').group_by('series_id').count().sort('series_id')['count']

counts = pl.DataFrame({'series_id': sorted(series_ids), 'onset_counts': onset_counts, 'wakeup_counts': wakeup_counts})
count_mismatches = counts.filter(counts['onset_counts'] != counts['wakeup_counts'])

train_series = train_series.filter(~pl.col('series_id').is_in(count_mismatches['series_id']))
train_events = train_events.filter(~pl.col('series_id').is_in(count_mismatches['series_id']))

# Updating list of series ids, not including series with no non-null values.
series_ids = train_events.drop_nulls()['series_id'].unique(maintain_order=True).to_list()
# Feature Engineering start from here
features, feature_cols = [pl.col('hour')], ['hour']

for mins in [5, 30, 60*2, 60*8]:
    features += [
        pl.col('enmo').rolling_mean(12 * mins, center=True, min_periods=1).abs().cast(pl.UInt16).alias(f'enmo_{mins}m_mean'),
        pl.col('enmo').rolling_max(12 * mins, center=True, min_periods=1).abs().cast(pl.UInt16).alias(f'enmo_{mins}m_max')
    ]
    feature_cols += [
        f'enmo_{mins}m_mean', f'enmo_{mins}m_max'
    ]

    # Getting first variations
    for var in ['enmo', 'anglez']:
        features += [
            (pl.col(var).diff().abs().rolling_mean(12 * mins, center=True, min_periods=1) * 10).abs().cast(pl.UInt32).alias(f'{var}_1v_{mins}m_mean'),
            (pl.col(var).diff().abs().rolling_max(12 * mins, center=True, min_periods=1) * 10).abs().cast(pl.UInt32).alias(f'{var}_1v_{mins}m_max')
        ]
        feature_cols += [
            f'{var}_1v_{mins}m_mean', f'{var}_1v_{mins}m_max'
        ]

id_cols = ['series_id', 'step', 'timestamp']

train_series = train_series.with_columns(
    features
).select(id_cols + feature_cols)

test_series = test_series.with_columns(
    features
).select(id_cols + feature_cols)
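To illustrate what these rolling aggregates capture, a small standalone sketch on synthetic values is shown below (5-minute window only); the real pipeline applies the same idea over the 5, 30, 120, and 480 minute windows above.

# Illustrative sketch on synthetic data: a centered rolling mean/max of enmo over a
# 5-minute window (12 steps per minute at 5-second epochs), mirroring the features above.
toy = pl.DataFrame({'enmo': [0.0] * 30 + [0.8] * 30 + [0.0] * 30})
toy = toy.with_columns(
    pl.col('enmo').rolling_mean(12 * 5, center=True, min_periods=1).alias('enmo_5m_mean'),
    pl.col('enmo').rolling_max(12 * 5, center=True, min_periods=1).alias('enmo_5m_max'),
)
print(toy[25:35])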
# train dataset preparation method
def make_train_dataset(train_data, train_events, drop_nulls=False) :
    series_ids = train_data['series_id'].unique(maintain_order=True).to_list()
    X, y = pl.DataFrame(), pl.DataFrame()
    for idx in tqdm(series_ids):
        # Normalizing sample features
        sample = train_data.filter(pl.col('series_id') == idx).with_columns(
            [(pl.col(col) / pl.col(col).std()).cast(pl.Float32) for col in feature_cols if col != 'hour']
        )

        events = train_events.filter(pl.col('series_id') == idx)

        if drop_nulls:
            # Removing datapoints on dates where no data was recorded
            sample = sample.filter(
                pl.col('timestamp').dt.date().is_in(events['timestamp'].dt.date())
            )

        X = X.vstack(sample[id_cols + feature_cols])

        onsets = events.filter((pl.col('event') == 'onset') & (pl.col('step') != None))['step'].to_list()
        wakeups = events.filter((pl.col('event') == 'wakeup') & (pl.col('step') != None))['step'].to_list()

        # NOTE: This will break if there are event series without any recorded onsets or wakeups
        y = y.vstack(sample.with_columns(
            sum([(onset <= pl.col('step')) & (pl.col('step') <= wakeup) for onset, wakeup in zip(onsets, wakeups)]).cast(pl.Boolean).alias('asleep')
        ).select('asleep')
        )

    y = y.to_numpy().ravel()

    return X, y
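The classifier that get_events() expects is not shown in this listing; a minimal sketch of how one might be trained on the output of make_train_dataset() is given below, assuming a scikit-learn random forest. The hyperparameter values here are placeholders for illustration, not the tuned values reported earlier.

# Sketch only: train a scikit-learn random forest on the engineered features.
from sklearn.ensemble import RandomForestClassifier

train_data = train_series.collect()  # materialize the lazy frame (may be memory-heavy)
X_train, y_train = make_train_dataset(train_data, train_events)

classifier = RandomForestClassifier(n_estimators=100, min_samples_leaf=300,
                                    random_state=42, n_jobs=-1)
classifier.fit(X_train[feature_cols].to_pandas(), y_train)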
# apply classifier to get event method
def get_events(series, classifier) :
'''
Takes a time series and a classifier and returns a formatted submission dataframe.
'''
    series_ids = series['series_id'].unique(maintain_order=True).to_list()
    events = pl.DataFrame(schema={'series_id':str, 'step':int, 'event':str, 'score':float})

    for idx in tqdm(series_ids):
        # Collecting sample and normalizing features
        scale_cols = [col for col in feature_cols if (col != 'hour') & (series[col].std() != 0)]
        X = series.filter(pl.col('series_id') == idx).select(id_cols + feature_cols).with_columns(
            [(pl.col(col) / series[col].std()).cast(pl.Float32) for col in scale_cols]
        )

        # Applying classifier to get predictions and scores
        preds, probs = classifier.predict(X[feature_cols]), classifier.predict_proba(X[feature_cols])[:, 1]

        # NOTE: Considered using rolling max to get sleep periods excluding <30 min interruptions, but it ended up decreasing performance
        X = X.with_columns(
            pl.lit(preds).cast(pl.Int8).alias('prediction'),
            pl.lit(probs).alias('probability')
        )

        # Getting predicted onset and wakeup time steps
        pred_onsets = X.filter(X['prediction'].diff() > 0)['step'].to_list()
        pred_wakeups = X.filter(X['prediction'].diff() < 0)['step'].to_list()

        if len(pred_onsets) > 0:
            # Ensuring all predicted sleep periods begin and end
            if min(pred_wakeups) < min(pred_onsets):
                pred_wakeups = pred_wakeups[1:]

            if max(pred_onsets) > max(pred_wakeups):
                pred_onsets = pred_onsets[:-1]

            # Keeping sleep periods longer than 30 minutes
            sleep_periods = [(onset, wakeup) for onset, wakeup in zip(pred_onsets, pred_wakeups) if wakeup - onset >= 12 * 30]

            for onset, wakeup in sleep_periods:
                # Scoring using mean probability over the period
                score = X.filter((pl.col('step') >= onset) & (pl.col('step') <= wakeup))['probability'].mean()

                # Adding sleep event to the dataframe
                events = events.vstack(pl.DataFrame().with_columns(
                    pl.Series([idx, idx]).alias('series_id'),
                    pl.Series([onset, wakeup]).alias('step'),
                    pl.Series(['onset', 'wakeup']).alias('event'),
                    pl.Series([score, score]).alias('score')
                ))

    # Adding row id column
    events = events.to_pandas().reset_index().rename(columns={'index': 'row_id'})

    return events
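Putting the pieces together, a hedged sketch of producing the event-level submission from the test series would look like this, again assuming the scikit-learn classifier sketched above; the output file name is a placeholder.

# Sketch: apply the trained classifier to the (collected) test series and write the submission.
test_data = test_series.collect()
submission = get_events(test_data, classifier)
submission.to_csv('submission_from_python.csv', index=False)  # placeholder file name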
# extract from R processed testing_set and testing_pred_prob, then use ap score in python to calculate
import pandas as pd
= pd.read_csv("testing_set.csv")
testing_set = pd.read_csv("testing_set_probs.csv")
testing_set_probs = 'series_id'
series_id_column_name = 'step'
time_column_name = 'awake'
event_column_name = 'score'
score_column_name
# Create the solution DataFrame
= testing_set[[series_id_column_name, time_column_name, event_column_name]]
solution
# Convert predicted probabilities to class labels using a threshold of 0.5
# The probabilities for class "1" are in the second column of testing_set_probs
= (testing_set_probs.iloc[:, 1] > 0.5).astype(int)
predicted_labels
# Create the submission DataFrame
= testing_set[[series_id_column_name, time_column_name, event_column_name]]
submission 'predicted_label'] = predicted_labels # Add predicted labels
submission['score'] = testing_set_probs.iloc[:, 1] # Add the probabilities as confidence scores
submission[
# Handling scoring intervals if use_scoring_intervals is True
= False # Set to False if not using scoring intervals
use_scoring_intervals if use_scoring_intervals:
# Example: Assuming 'start_event' and 'end_event' columns in testing_set
# These columns should represent the intervals for scoring
'start_event'] = testing_set['start_event']
solution['end_event'] = testing_set['end_event']
solution['start_event'] = testing_set['start_event']
submission['end_event'] = testing_set['end_event']
submission[
= solution.rename(columns={'awake': 'event'})
solution = submission.rename(columns={'awake': 'event'})
submission 'event'] = solution['event'].map({0: 'onset', 1: 'wakeup'})
solution['event'] = submission['event'].map({0: 'onset', 1: 'wakeup'})
submission['testing_set_solution.csv',index=False)
solution.to_csv('testing_set_submission.csv',index=False)
submission.to_csv(= score(solution, submission, tolerances, **column_names)
rf_ap_score = score(solution, submission, tolerances, **column_names)

plot2 + labs(caption = "Figure5") +
  theme(plot.caption = element_text(hjust = 0.5))
plot4 + labs(caption = "Figure6") +
  theme(plot.caption = element_text(hjust = 0.5))

plot(roc_obj, main = paste("ROC Curve, AUC =", round(auc_value, 6)))
mtext("Figure7", side = 1, line = 4.15, cex = 0.8)

# Printing the Confusion Matrix
print(confusionMatrix)

# Printing the metrics
print(paste("Precision:", precision))
print(paste("Recall:", recall))
print(paste("F1 Score:", f1_score))

rf_feature_importance_plot + labs(caption = "Figure8") +
  theme(plot.caption = element_text(hjust = 0.5))

final_submission

### An Optional Dive into GGIR Package

train_events <- read.csv("train_events.csv")
train_series <- arrow::read_parquet("Zzzs_train.parquet")
test_series <- arrow::read_parquet("test_series.parquet")

write.csv(train_series, "Zzzs_train.csv", row.names = FALSE)
write.csv(test_series, "test_series.csv", row.names = FALSE)

library(GGIR)
# g.shell.GGIR