Input data and analyzed data of "Topology of synaptic connectivity constrains neuronal stimulus representation (...)"
2020 · Open Access
DOI: https://doi.org/10.5281/zenodo.4290211
OA: W4393634439
This dataset contains the input data, as well as the analyzed data, on which our preprint <em><strong>Topology of synaptic connectivity constrains neuronal stimulus representation, predicting two complementary coding strategies</strong></em> (available on bioRxiv) is based. The input data (<em>input_data.zip</em>) contains everything needed to run the full analysis pipeline, from the start through the generation of the figures found in the manuscript. However, some of the analysis steps can be computationally heavy, so we also provide the outputs of these expensive steps; they can be used directly in conjunction with the jupyter notebooks (<em>notebooks.zip</em>) to generate the figures.

<strong>Overview</strong>

An overview image can be found <strong>here</strong>. Blue squares denote input/output files (part of this dataset). Grey circles denote steps of the analysis pipeline (implemented in the github repository). Red rectangles denote configuration files (part of this dataset and also of the github repository). This dataset can also be browsed, downloaded and accessed as linked open data from the BBP Knowledge Graph based Data studios.

<strong>Contained file types and their structure</strong>

We provide four types of files. Configuration files specify analysis parameters and define the expected locations of the data files. Input files are the inputs to the analysis pipeline. Analyzed files are the outputs of that pipeline. Finally, we provide a number of jupyter notebooks that use the analyzed files to generate the manuscript figures.

If you want to re-run the entire analysis pipeline, you need the code and configuration files from the repository, the input files and the notebooks; the analyzed files will be generated as you run the pipeline. For information on how to run this, refer to the readme.
If you only want to generate the figures, you still need the code and configuration files from the repository, as the code contains a package for reading the result files; further, you need the analyzed files in addition to the input files. Of course, you can also run parts of the analysis pipeline and download the outputs for the rest. For everything to run smoothly, the files have to be placed into the expected file structure. You can look up and configure the file structure in the configuration files. Below, we describe the default layout, which is very simple (<em>root</em> is where you placed the code from our repository and can be any location on your file system):

Configuration files: Part of the repository. Placed into <em>root/working_dir/configs</em>.

Input data: Place into <em>root/working_dir/data</em>, then unzip in place.

<em>input_data.zip</em> -- Input data. Contains details on the model used in the manuscript and the output (spike times) of the simulation described in the manuscript. For details on its contents, see the readme.

Analyzed data: Place into <em>root/working_dir/data</em>, then unzip in place.

<em>classifier_features_results.zip</em> -- Output of the "classifier" step. Results of stimulus classification on the data in <em>features.zip</em>.

<em>classifier_manifold_results.zip</em> -- Output of the "classifier" step. Results of stimulus classification on the data in <em>extracted_components.zip</em>.

<em>community_database.zip</em> -- Output of "gen_topo_db". Various topological parameters related to the close neighborhood of neurons in the model.

<em>extracted_components.zip</em> -- Output of "manifold_analysis". Results of factor analysis on the spike times in the <em>input_data</em>.

<em>features.zip</em> -- Output of "topological_featurization". Results of a new dimensionality reduction method that we introduce in the manuscript.

<em>split_spike_trains.zip</em> -- Output of "split_time_windows". The spike trains, split into time windows that are the responses to the individual stimuli injected in the simulation.

<em>structural_parameters.zip</em> -- Output of "Structural tribe analysis". Values of the topological parameters in <em>community_database.zip</em> associated with the neuron samples specified in <em>tribes.zip</em>.

<em>structural_parameters_vol.zip</em> -- Output of "Structural tribe analysis". Same as above, but for volumetric neuron samples.

<em>triads.zip</em> -- Output of "Triad-counts". Over- and under-expression of triad motifs in the samples in <em>tribes.zip</em>.

<em>tribes.zip</em> -- Output of "sample_tribes". Specific neuron samples that are then analyzed further.

Notebooks: Place into <em>root/notebooks</em>, then unzip in place.

<em>notebooks.zip</em> -- Jupyter notebooks. Run them to generate the figures in the manuscript.

<strong>Updates:</strong>

v1.1.0 (2020/12/11): Added some additional control cases to the results for figure 7. These results will probably not be updated on bioRxiv, but will go into the submission to a journal.

v1.2.0 (2021/10/05): Updated notebooks.zip with changes we made in response to reviewers' feedback.
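As an illustration, the default file placement described above could be set up from a shell as follows. This is a minimal sketch, not part of the dataset: the <em>root</em> location chosen here is hypothetical, and the unzip calls assume the archives were downloaded into the current directory.

```shell
# Sketch: create the default layout and unzip the archives in place.
# "root" is a hypothetical location; it can be anywhere on your file system,
# as long as it matches where you placed the code from the repository.
root=./topology_dataset_root

mkdir -p "$root/working_dir/configs" "$root/working_dir/data" "$root/notebooks"

# Unzip each downloaded data archive into the expected data directory
# (only two archives shown; the analyzed-data archives go to the same place).
for archive in input_data.zip classifier_features_results.zip; do
    [ -f "$archive" ] && unzip -o -d "$root/working_dir/data" "$archive"
done

# The notebooks go into their own directory.
[ -f notebooks.zip ] && unzip -o -d "$root/notebooks" notebooks.zip

ls "$root/working_dir"
```

The configuration files from the repository would then sit in <em>root/working_dir/configs</em>, matching the locations the configs expect by default.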