diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 000000000..50b6da15b
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,81 @@
+# Contributing to tedana
+
+Welcome to the tedana repository! We're excited you're here and want to contribute.
+
+These guidelines are designed to make it as easy as possible to get involved. If you have any questions that aren't discussed below, please let us know by opening an [issue][link_issues]!
+
+Before you start you'll need to set up a free [GitHub][link_github] account and sign in. Here are some [instructions][link_signupinstructions].
+
+### Labels
+
+The current list of labels is available [here][link_labels] and includes:
+
+* [![Help Wanted](https://img.shields.io/badge/-help%20wanted-159818.svg)][link_helpwanted] *These issues contain a task that a member of the team has determined we need additional help with.*
+
+ If you feel that you can contribute to one of these issues, we especially encourage you to do so!
+
+* [![Bugs](https://img.shields.io/badge/-bugs-fc2929.svg)][link_bugs] *These issues point to problems in the project.*
+
+  If you find a new bug, please give as much detail as possible in your issue, including steps to recreate the error.
+ If you experience the same bug as one already listed, please add any additional information that you have as a comment.
+
+* [![Requests](https://img.shields.io/badge/-requests-fbca04.svg)][link_requests] *These issues are asking for new features to be added to the project.*
+
+  Please try to make sure that your requested feature is distinct from any others that have already been requested or implemented. If you find one that's similar but there are subtle differences, please reference the other request in your issue.
+
+## Making a change
+
+We appreciate all contributions to tedana, but those accepted fastest will follow a workflow similar to the following:
+
+**1. Comment on an existing issue or open a new issue referencing your addition.**
+
+This allows other members of the tedana development team to confirm that you aren't overlapping with work that's currently underway and that everyone is on the same page with the goal of the work you're going to carry out.
+
+[This blog][link_pushpullblog] is a nice explanation of why putting this work in up front is so useful to everyone involved.
+
+**2. [Fork][link_fork] the [tedana repository][link_tedana] to your profile.**
+
+This is now your own unique copy of tedana. Changes here won't affect anyone else's work, so it's a safe space to explore edits to the code!
+
+Make sure to [keep your fork up to date][link_updateupstreamwiki] with the master repository.
+
+**3. Make the changes you've discussed.**
+
+Try to keep the changes focused. We've found that working on a [new branch][link_branches] makes it easier to keep your changes targeted.
+
+**4. Submit a [pull request][link_pullrequest].**
+
+A member of the development team will review your changes to confirm that they can be merged into the main codebase.
+
+## Recognizing contributions
+
+We welcome and recognize all contributions from documentation to testing to code development.
+You can see a list of current contributors in the [contributors tab][link_contributors].
+
+## Thank you!
+
+You're awesome. :wave::smiley:
+
+
+
+*— Based on contributing guidelines from the [STEMMRoleModels][link_stemmrolemodels] project.*
+
+[link_github]: https://github.com/
+[link_tedana]: https://github.com/ME-ICA/tedana
+[link_signupinstructions]: https://help.github.com/articles/signing-up-for-a-new-github-account
+[link_react]: https://github.com/blog/2119-add-reactions-to-pull-requests-issues-and-comments
+[link_issues]: https://github.com/ME-ICA/tedana/issues
+[link_labels]: https://github.com/ME-ICA/tedana/labels
+[link_discussingissues]: https://help.github.com/articles/discussing-projects-in-issues-and-pull-requests
+
+[link_bugs]: https://github.com/ME-ICA/tedana/labels/bug
+[link_helpwanted]: https://github.com/ME-ICA/tedana/labels/help%20wanted
+[link_requests]: https://github.com/ME-ICA/tedana/labels/requests
+
+[link_pullrequest]: https://help.github.com/articles/creating-a-pull-request/
+[link_fork]: https://help.github.com/articles/fork-a-repo/
+[link_pushpullblog]: https://www.igvita.com/2011/12/19/dont-push-your-pull-requests/
+[link_branches]: https://help.github.com/articles/creating-and-deleting-branches-within-your-repository/
+[link_updateupstreamwiki]: https://help.github.com/articles/syncing-a-fork/
+[link_stemmrolemodels]: https://github.com/KirstieJane/STEMMRoleModels
+[link_contributors]: https://github.com/ME-ICA/tedana/graphs/contributors
diff --git a/Code_of_Conduct.md b/Code_of_Conduct.md
new file mode 100644
index 000000000..114c44fa4
--- /dev/null
+++ b/Code_of_Conduct.md
@@ -0,0 +1,60 @@
+# Code of Conduct
+
+In the interest of fostering an open and welcoming environment we want
+participation in our project and our community to be a harassment-free
+experience for everyone.
+
+Although no list can hope to be all-encompassing, we explicitly honor diversity in age,
+body size, disability, ethnicity, gender identity and expression, level of experience, native language, education, socio-economic status, nationality, personal appearance, race, religion,
+or sexual identity and orientation.
+
+## Our Standards
+
+We aim to promote behavior that contributes to a positive and welcoming environment.
+Examples of such behavior include:
+
+* Using inclusive language
+* Being respectful of differing viewpoints and experiences
+* Showing empathy towards other community members
+
+We do not tolerate harassment or other inappropriate behavior in our community.
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery and unwelcome sexual attention or
+ advances
+* Personal or political attacks on contributors, and insulting or derogatory
+ comments on contributed code with the intent to undermine contributions
+* Public or private harassment
+
+## Our Responsibilities
+
+The maintainers are responsible for clarifying the standards
+of acceptable behavior and are expected to take appropriate and fair corrective
+action in response to any instances of unacceptable behavior.
+
+The maintainers have the right and responsibility to remove,
+edit, or reject comments, commits, code, issues, and other contributions
+that are not aligned to this Code of Conduct, or to ban temporarily or
+permanently any contributor for other behaviors that they deem inappropriate,
+threatening, offensive, or harmful.
+
+## Scope
+
+This Code of Conduct applies both within our online GitHub repository
+and in public spaces when an individual is representing the project or its community.
+Examples of representing a project or community include using an official project e-mail
+address, posting via an official social media account, or acting as an appointed
+representative at an online or offline event.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be
+reported by contacting Elizabeth DuPre at elizabeth.dupre@mail.mcgill.ca.
+Confidentiality will be respected in reporting.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
+available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
+
+[homepage]: https://www.contributor-covenant.org
diff --git a/README.md b/README.md
index ab6b999db..a09a8de47 100644
--- a/README.md
+++ b/README.md
@@ -1,2 +1,42 @@
# tedana
-TE-Dependent Analysis
+
+TE-Dependent Analysis (_tedana_) is a Python module for denoising multi-echo fMRI data.
+
+tedana is part of the ME-ICA pipeline, and therefore assumes that you're working with already preprocessed data. If you're in need of a preprocessing pipeline, we recommend [FMRIPREP](https://github.com/poldracklab/fmriprep/), which has been tested for compatibility with multi-echo fMRI data.
+
+## About
+
+Multi-echo fMRI data collection entails acquiring multiple TEs (commonly called [echo times](http://mriquestions.com/tr-and-te.html)) for each collected fMRI volume. Our signal of interest, the Blood Oxygen Level Dependent or [BOLD signal](http://www.fil.ion.ucl.ac.uk/spm/course/slides10-zurich/Kerstin_BOLD.pdf), is known to decay at a set rate within each fMRI volume. Collecting multiple echoes therefore allows us to infer whether components of the fMRI signal are BOLD-related or driven by acquisition artifacts, like participant motion. For a review, see [Kundu et al. (2017), _NeuroImage_](https://paperpile.com/shared/eH3PPu).
+
+In tedana, we combine all collected echoes, then decompose the resulting time series into components that can be classified as BOLD or non-BOLD. This is performed in a series of steps (sketched in code below), including:
+
+* Principal component analysis
+* Independent component analysis
+* Component classification
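+
+As a very rough sketch of those steps (this is *not* the tedana implementation, and all variable names below are purely illustrative), the decomposition of an optimally combined, masked time series might look like:
+
+```python
+import numpy as np
+from sklearn.decomposition import PCA, FastICA
+
+# Hypothetical (n_voxels, n_timepoints) array of optimally combined, masked data
+optcom_data = np.random.rand(1000, 200)
+
+# Reduce dimensionality, then unmix into independent components
+pca = PCA(n_components=50)
+reduced = pca.fit_transform(optcom_data.T)   # (n_timepoints, n_components)
+ica = FastICA(max_iter=5000)
+component_ts = ica.fit_transform(reduced)    # component time series
+
+# Each component is then classified as BOLD or non-BOLD using TE-dependence
+# metrics (kappa and rho) computed from the separate echoes.
+```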
+
+More information and documentation can be found at https://tedana.readthedocs.io/.
+
+## Installation
+
+You'll need to set up a working development environment to use tedana. We provide a Dockerfile for this purpose (check out [tips on using Docker](https://neurohackweek.github.io/docker-for-scientists/)), or you can set up your environment locally. If you choose the latter, make sure the following packages are installed:
+
+```
+mdp
+nilearn
+nibabel>=2.1.0
+numpy
+pybids>=0.4.0
+scikit-learn
+```
+
+tedana will eventually be hosted on PyPI. In the meantime, you can still install it with `pip` using:
+
+```
+pip install https://github.com/ME-ICA/tedana/archive/master.tar.gz
+```
+
+## Development
+
+We :yellow_heart: new contributors! To get started, check out [our contributing guidelines](https://github.com/emdupre/tedana/blob/master/CONTRIBUTING.md).
+
+Want to learn more about our plans for developing tedana? Check out [our roadmap](https://github.com/emdupre/tedana/projects). Have a question, comment, or suggestion? Open or comment on one of [our issues](https://github.com/emdupre/tedana/issues)!
+
+We ask that all contributions to tedana respect our [code of conduct](https://github.com/emdupre/tedana/blob/master/Code_of_Conduct.md).
diff --git a/config.yml b/config.yml
new file mode 100644
index 000000000..477797f6d
--- /dev/null
+++ b/config.yml
@@ -0,0 +1,43 @@
+# Python CircleCI 2.0 configuration file
+#
+# Check https://circleci.com/docs/2.0/language-python/ for more details
+#
+version: 2
+jobs:
+ build:
+ docker:
+ - image: emdupre/meica-docker:0.0.3
+
+ working_directory: /home/neuro/code/
+
+ steps:
+ - checkout
+
+ - run:
+ name: download data
+ command: curl -L -o /home/neuro/data/zcat_ffd.nii.gz https://www.dropbox.com/s/ljeqskdmnc6si9d/zcat_ffd.nii.gz?dl=0
+
+ - run:
+ name: download and unzip processed data
+ command: |
+ curl -L -o /home/neuro/data/test_res.tar.gz https://www.dropbox.com/s/yzswu6ljnfxlhyo/test_res.tar.gz?dl=0
+ tar -xvzf /home/neuro/data/test_res.tar.gz -C /home/neuro/data
+
+
+ - run:
+ name: install dependencies
+ command: |
+ conda create --name venv python=3.6
+ source activate venv
+ pip install pytest
+ pip install -r requirements.txt
+ python setup.py install
+
+ # run tests!
+ - run:
+ name: run tests
+ command: |
+ source activate venv
+ py.test ./tedana/tests/test_tedana.py
+ no_output_timeout: "40m"
+      - run:
+          name: upload coverage
+          command: bash <(curl -s https://codecov.io/bash)
diff --git a/docs/Makefile b/docs/Makefile
new file mode 100644
index 000000000..2c6bb4d55
--- /dev/null
+++ b/docs/Makefile
@@ -0,0 +1,20 @@
+# Minimal makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line.
+SPHINXOPTS =
+SPHINXBUILD = python -msphinx
+SPHINXPROJ = meica
+SOURCEDIR = .
+BUILDDIR = _build
+
+# Put it first so that "make" without argument is like "make help".
+help:
+ @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
+.PHONY: help Makefile
+
+# Catch-all target: route all unknown targets to Sphinx using the new
+# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
+%: Makefile
+ @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
diff --git a/docs/_static/theme_overrides.css b/docs/_static/theme_overrides.css
new file mode 100644
index 000000000..63ee6cc74
--- /dev/null
+++ b/docs/_static/theme_overrides.css
@@ -0,0 +1,13 @@
+/* override table width restrictions */
+@media screen and (min-width: 767px) {
+
+ .wy-table-responsive table td {
+ /* !important prevents the common CSS stylesheets from overriding
+ this as on RTD they are loaded after this stylesheet */
+ white-space: normal !important;
+ }
+
+ .wy-table-responsive {
+ overflow: visible !important;
+ }
+}
diff --git a/docs/conf.py b/docs/conf.py
new file mode 100644
index 000000000..e99cc5d73
--- /dev/null
+++ b/docs/conf.py
@@ -0,0 +1,180 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+#
+# tedana documentation build configuration file, created by
+# sphinx-quickstart
+#
+# This file is execfile()d with the current directory set to its
+# containing dir.
+#
+# Note that not all possible configuration values are present in this
+# autogenerated file.
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+import sys
+sys.path.insert(0, os.path.abspath('../tedana'))
+
+
+# -- General configuration ------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#
+# needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = ['sphinx.ext.autodoc',
+              'sphinx.ext.napoleon',
+              'sphinxarg.ext']
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix(es) of source filenames.
+# You can specify multiple suffix as a list of string:
+#
+# source_suffix = ['.rst', '.md']
+source_suffix = '.rst'
+
+# The master toctree document.
+master_doc = 'index'
+
+# General information about the project.
+project = 'tedana'
+copyright = '2017-2018, Elizabeth DuPre'
+author = 'Elizabeth DuPre'
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+#
+# The short X.Y version.
+version = ''
+# The full version, including alpha/beta/rc tags.
+release = ''
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#
+# This is also used if you do content translation via gettext catalogs.
+# Usually you set "language" from the command line for these cases.
+language = None
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This patterns also effect to html_static_path and html_extra_path
+exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = 'sphinx'
+
+# If true, `todo` and `todoList` produce output, else they produce nothing.
+todo_include_todos = False
+
+
+# -- Options for HTML output ----------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+# import the installed theme package
+import sphinx_rtd_theme
+html_theme = 'sphinx_rtd_theme'
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further. For a list of options available for each theme, see the
+# documentation.
+#
+# html_theme_options = {}
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
+
+# html_context = {
+# 'css_files': [
+# '_static/theme_overrides.css', # override wide tables in RTD theme
+# ],
+# }
+
+# Custom sidebar templates, must be a dictionary that maps document names
+# to template names.
+#
+# This is required for the alabaster theme
+# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars
+html_sidebars = {
+ '**': [
+ 'about.html',
+ 'navigation.html',
+ 'relations.html', # needs 'show_related': True theme option to display
+ 'searchbox.html',
+ 'donate.html',
+ ]
+}
+
+
+# -- Options for HTMLHelp output ------------------------------------------
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'tedanadoc'
+
+
+# -- Options for LaTeX output ---------------------------------------------
+
+latex_elements = {
+ # The paper size ('letterpaper' or 'a4paper').
+ #
+ # 'papersize': 'letterpaper',
+
+ # The font size ('10pt', '11pt' or '12pt').
+ #
+ # 'pointsize': '10pt',
+
+ # Additional stuff for the LaTeX preamble.
+ #
+ # 'preamble': '',
+
+ # Latex figure (float) alignment
+ #
+ # 'figure_align': 'htbp',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title,
+# author, documentclass [howto, manual, or own class]).
+latex_documents = [
+    (master_doc, 'tedana.tex', 'tedana Documentation',
+ 'Elizabeth DuPre, Prantik Kundu', 'manual'),
+]
+
+
+# -- Options for manual page output ---------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [
+ (master_doc, 'tedana', 'tedana Documentation',
+ [author], 1)
+]
+
+
+# -- Options for Texinfo output -------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+# dir menu entry, description, category)
+texinfo_documents = [
+ (master_doc, 'tedana', 'tedana Documentation',
+ author, 'tedana', 'One line description of project.',
+ 'Miscellaneous'),
+]
diff --git a/docs/index.rst b/docs/index.rst
new file mode 100644
index 000000000..422faa119
--- /dev/null
+++ b/docs/index.rst
@@ -0,0 +1,51 @@
+.. include:: <isonum.txt>
+
+
+tedana: TE Dependent Analysis
+==================================================
+
+The ``tedana`` package is part of the ME-ICA pipeline, performing TE-dependent
+analysis of multi-echo functional magnetic resonance imaging (fMRI) data.
+
+.. image:: https://circleci.com/gh/emdupre/me-ica/tree/py3.svg?style=shield&circle-token=:circle-token
+ :target: https://circleci.com/gh/emdupre/me-ica/tree/py3
+
+.. image:: http://img.shields.io/badge/License-LGPL%202.0-blue.svg
+ :target: https://opensource.org/licenses/LGPL-2.1
+
+Citations
+---------
+
+When using tedana, please include the following citations:
+
+Kundu, P., Inati, S.J., Evans, J.W., Luh, W.M. & Bandettini, P.A.
+(2012).
+`Differentiating BOLD and non-BOLD signals in fMRI time series using multi-echo EPI.`_
+*NeuroImage*, *60*, 1759-1770.
+
+.. _Differentiating BOLD and non-BOLD signals in fMRI time series using multi-echo EPI.: http://dx.doi.org/10.1016/j.neuroimage.2011.12.028
+
+License Information
+-------------------
+
+tedana is licensed under GNU Lesser General Public License version 2.1.
+All trademarks referenced herein are property of their respective holders.
+
+Copyright |copy| 2011-2017, Prantik Kundu. All rights reserved.
+
+
+.. toctree::
+ :maxdepth: 2
+ :caption: Contents:
+
+ introduction
+ installation
+ usage
+
+
+Indices and tables
+------------------
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
diff --git a/docs/installation.rst b/docs/installation.rst
new file mode 100644
index 000000000..451d2495c
--- /dev/null
+++ b/docs/installation.rst
@@ -0,0 +1,35 @@
+Installation
+============
+
+We recommend running `containerized versions`_ of ``meica`` to avoid dependency issues.
+The `Docker Engine`_ appropriate for your system (i.e., Linux, Mac OS X, or Windows) is required to access and run the container.
+
+.. _Docker Engine: https://docs.docker.com/engine/installation/
+
+It is also possible to run a local or "bare-metal" ``meica`` if your system has `AFNI`_ and Python 2.7 or 3.4+ installed.
+With a local python installation, we recommend using the `Anaconda`_ or `Miniconda`_ package manager, as these allow for the creation of `virtual environments`_.
+
+.. _AFNI: https://afni.nimh.nih.gov/
+.. _Anaconda: https://docs.continuum.io/anaconda/install/
+.. _Miniconda: https://conda.io/miniconda.html
+.. _virtual environments: https://uoa-eresearch.github.io/eresearch-cookbook/recipe/2014/11/20/conda/
+
+Containerized versions
+----------------------
+
+To access a containerized version of ``meica`` simply pull the latest Docker image.
+This can be accomplished with the following command:
+
+``docker pull emdupre/meica:0.0.1``
+
+Local installation
+------------------
+
+Local installation requires the following dependencies:
+
+* Python 2.7 or 3.4+
+* AFNI
+
+You can download ``meica`` to your local environment with the command ``pip install meica``.
+Alternatively, for "bleeding-edge" features you can clone the latest version of ``meica`` directly from GitHub.
+To do so, ``git clone https://github.com/me-ica/me-ica.git``.
diff --git a/docs/introduction.rst b/docs/introduction.rst
new file mode 100644
index 000000000..3f31d4b4e
--- /dev/null
+++ b/docs/introduction.rst
@@ -0,0 +1,28 @@
+Introduction
+============
+
+``tedana`` works in the following steps:
+
+#. Computes PCA and ICA in conjunction with TE-dependence analysis
+
+Derivatives
+-----------
+
+* ``medn``
+ 'Denoised' BOLD time series after: basic preprocessing,
+ T2* weighted averaging of echoes (i.e. 'optimal combination'),
+ ICA denoising.
+ Use this dataset for task analysis and resting state time series correlation analysis.
+* ``tsoc``
+ 'Raw' BOLD time series dataset after: basic preprocessing
+ and T2* weighted averaging of echoes (i.e. 'optimal combination').
+ 'Standard' denoising or task analyses can be assessed on this dataset
+ (e.g. motion regression, physio correction, scrubbing, etc.)
+ for comparison to ME-ICA denoising.
+* ``mefc``
+    Component maps (in units of ΔS) of accepted BOLD ICA components.
+    Use this dataset for ME-ICR seed-based connectivity analysis.
+* ``mefl``
+    Component maps (in units of ΔS) of ALL ICA components.
+* ``ctab``
+ Table of component Kappa, Rho, and variance explained values, plus listing of component classifications.
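+
+As an example, the ``medn`` output can be loaded like any other NIfTI image for downstream analysis; a minimal sketch using ``nibabel`` (the file name here is hypothetical):
+
+.. code-block:: python
+
+    import nibabel as nib
+
+    # Load the ICA-denoised, optimally combined time series
+    img = nib.load('medn_output.nii.gz')
+    data = img.get_fdata()  # 4D array: (x, y, z, time)
+    print(data.shape)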
diff --git a/docs/make.bat b/docs/make.bat
new file mode 100644
index 000000000..9e3a58643
--- /dev/null
+++ b/docs/make.bat
@@ -0,0 +1,36 @@
+@ECHO OFF
+
+pushd %~dp0
+
+REM Command file for Sphinx documentation
+
+if "%SPHINXBUILD%" == "" (
+ set SPHINXBUILD=python -msphinx
+)
+set SOURCEDIR=.
+set BUILDDIR=_build
+set SPHINXPROJ=meica
+
+if "%1" == "" goto help
+
+%SPHINXBUILD% >NUL 2>NUL
+if errorlevel 9009 (
+ echo.
+ echo.The Sphinx module was not found. Make sure you have Sphinx installed,
+ echo.then set the SPHINXBUILD environment variable to point to the full
+ echo.path of the 'sphinx-build' executable. Alternatively you may add the
+ echo.Sphinx directory to PATH.
+ echo.
+ echo.If you don't have Sphinx installed, grab it from
+ echo.http://sphinx-doc.org/
+ exit /b 1
+)
+
+%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
+goto end
+
+:help
+%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
+
+:end
+popd
diff --git a/docs/source/meica.rst b/docs/source/meica.rst
new file mode 100644
index 000000000..c43869c4e
--- /dev/null
+++ b/docs/source/meica.rst
@@ -0,0 +1,53 @@
+meica package
+=============
+
+Subpackages
+-----------
+
+.. toctree::
+
+ meica.tests
+
+Submodules
+----------
+
+meica\.alignp\_mepi\_anat module
+--------------------------------
+
+.. automodule:: meica.alignp_mepi_anat
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+meica\.meica module
+-------------------
+
+.. automodule:: meica.meica
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+meica\.t2smap module
+--------------------
+
+.. automodule:: meica.t2smap
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+meica\.tedana module
+--------------------
+
+.. automodule:: meica.tedana
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+
+Module contents
+---------------
+
+.. automodule:: meica
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/docs/source/meica.tests.rst b/docs/source/meica.tests.rst
new file mode 100644
index 000000000..cfcc106f5
--- /dev/null
+++ b/docs/source/meica.tests.rst
@@ -0,0 +1,22 @@
+meica\.tests package
+====================
+
+Submodules
+----------
+
+meica\.tests\.test\_meica module
+--------------------------------
+
+.. automodule:: meica.tests.test_meica
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+
+Module contents
+---------------
+
+.. automodule:: meica.tests
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/docs/source/modules.rst b/docs/source/modules.rst
new file mode 100644
index 000000000..85619db8f
--- /dev/null
+++ b/docs/source/modules.rst
@@ -0,0 +1,7 @@
+meica
+=====
+
+.. toctree::
+ :maxdepth: 4
+
+ meica
diff --git a/docs/usage.rst b/docs/usage.rst
new file mode 100644
index 000000000..6e5322a60
--- /dev/null
+++ b/docs/usage.rst
@@ -0,0 +1,20 @@
+Usage
+=====
+
+tedana minimally requires:
+
+#. acquired echo times (in milliseconds), and
+#. functional datasets equal in number to the acquired echoes.
+
+But you can supply many other options, viewable with ``tedana -h``.
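+
+As a minimal sketch, the same workflow can be invoked from Python rather than the shell (the file name and echo times below are purely hypothetical):
+
+.. code-block:: python
+
+    from tedana.cli import run
+
+    # Parse the same arguments accepted on the command line, then run the workflow
+    run.main(['-d', 'zcat_ffd.nii.gz', '-e', '14.5', '38.5', '62.5'])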
+
+Command line options
+--------------------
+.. argparse::
+ :ref: tedana.get_parser
+ :prog: tedana
+ :nodefault:
+ :nodefaultconst:
+
+.. tip:: FWHM smoothing is not recommended.
+ tSNR boost is provided by optimal combination of echoes.
diff --git a/requirements.txt b/requirements.txt
new file mode 100644
index 000000000..bacd8a123
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,8 @@
+numpy
+scikit-learn
+mdp
+nilearn
+nibabel>=2.1.0
+pybids>=0.4.0
+nipype
+duecredit
\ No newline at end of file
diff --git a/setup.py b/setup.py
new file mode 100644
index 000000000..735306881
--- /dev/null
+++ b/setup.py
@@ -0,0 +1,54 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# @Author: oesteban
+""" meica setup script """
+
+
+def main():
+ """ Install entry-point """
+ from io import open
+ from os import path as op
+ from inspect import getfile, currentframe
+ from setuptools import setup, find_packages
+
+ this_path = op.dirname(op.abspath(getfile(currentframe())))
+
+ # For Python 3: use a locals dictionary
+ # http://stackoverflow.com/a/1463370/6820620
+ ldict = locals()
+    # Get version and release info, which is all stored in tedana/info.py
+ module_file = op.join(this_path, 'tedana', 'info.py')
+ with open(module_file) as infofile:
+ pythoncode = [line for line in infofile.readlines() if not line.strip().startswith('#')]
+ exec('\n'.join(pythoncode), globals(), ldict)
+
+ setup(
+ name=ldict['__packagename__'],
+ version=ldict['__version__'],
+ description=ldict['__description__'],
+ long_description=ldict['__longdesc__'],
+ author=ldict['__author__'],
+ author_email=ldict['__email__'],
+ maintainer=ldict['__maintainer__'],
+ maintainer_email=ldict['__email__'],
+ url=ldict['__url__'],
+ license=ldict['__license__'],
+ classifiers=ldict['CLASSIFIERS'],
+ download_url=ldict['DOWNLOAD_URL'],
+ # Dependencies handling
+ install_requires=ldict['REQUIRES'],
+ tests_require=ldict['TESTS_REQUIRES'],
+ extras_require=ldict['EXTRA_REQUIRES'],
+ entry_points={'console_scripts': [
+ 'ama=meica.cli.run_alignp_mepi_anat:main',
+ 'meica=meica.cli.run_meica:main',
+ 't2smap=meica.cli.run_t2smap:main',
+ 'tedana=meica.cli.run_tedana:main'
+ ]},
+ packages=find_packages(exclude=("tests",)),
+ zip_safe=False,
+ )
+
+
+if __name__ == '__main__':
+ main()
diff --git a/tedana/__init__.py b/tedana/__init__.py
new file mode 100644
index 000000000..ce4a98c4c
--- /dev/null
+++ b/tedana/__init__.py
@@ -0,0 +1,36 @@
+# -*- coding: utf-8 -*-
+# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
+# vi: set ft=python sts=4 ts=4 sw=4 et:
+"""
+tedana: A Python package for TE-dependent analysis of multi-echo data.
+"""
+
+from .due import (due, Doi)
+
+from .info import (
+ __version__,
+ __author__,
+ __copyright__,
+ __credits__,
+ __license__,
+ __maintainer__,
+ __email__,
+ __status__,
+ __url__,
+ __packagename__,
+ __description__,
+ __longdesc__
+)
+
+import warnings
+
+# cmp is not used, so ignore nipype-generated warnings
+warnings.filterwarnings('ignore', r'cmp not installed')
+
+# Citation for the algorithm.
+due.cite(Doi('10.1016/j.neuroimage.2011.12.028'),
+ description='Introduces MEICA and tedana.',
+ version=__version__, path='tedana', cite_module=True)
+due.cite(Doi('10.1073/pnas.1301725110'),
+ description='Improves MEICA and tedana.',
+ version=__version__, path='tedana', cite_module=True)
diff --git a/tedana/cli/__init__.py b/tedana/cli/__init__.py
new file mode 100644
index 000000000..b1c932756
--- /dev/null
+++ b/tedana/cli/__init__.py
@@ -0,0 +1,2 @@
+# emacs: -*- mode: python-mode; py-indent-offset: 4; tab-width: 4; indent-tabs-mode: nil -*-
+# ex: set sts=4 ts=4 sw=4 et:
diff --git a/tedana/cli/run.py b/tedana/cli/run.py
new file mode 100644
index 000000000..4448ea811
--- /dev/null
+++ b/tedana/cli/run.py
@@ -0,0 +1,120 @@
+import argparse
+from tedana.interfaces import tedana
+
+
+def get_parser():
+ """
+ Parses command line inputs for tedana
+ Returns
+ -------
+ parser.parse_args() : argparse dict
+ """
+ parser = argparse.ArgumentParser()
+ parser.add_argument('-d',
+ dest='data',
+ nargs='+',
+ help='Spatially Concatenated Multi-Echo Dataset',
+ required=True)
+ parser.add_argument('-e',
+ dest='tes',
+ nargs='+',
+                        help='Echo times (in ms) ex: 15 39 63',
+ required=True)
+ parser.add_argument('--mix',
+ dest='mixm',
+ help='Mixing matrix. If not provided, ' +
+ 'ME-PCA & ME-ICA is done.',
+ default=None)
+ parser.add_argument('--ctab',
+ dest='ctab',
+ help='Component table extract pre-computed ' +
+ 'classifications from.',
+ default=None)
+ parser.add_argument('--manacc',
+ dest='manacc',
+ help='Comma separated list of manually ' +
+ 'accepted components',
+ default=None)
+ parser.add_argument('--strict',
+ dest='strict',
+ action='store_true',
+ help='Ignore low-variance ambiguous components',
+ default=False)
+ parser.add_argument('--no_gscontrol',
+ dest='no_gscontrol',
+ action='store_true',
+                        help='Disable global signal control (spatial approach)',
+ default=False)
+ parser.add_argument('--kdaw',
+ dest='kdaw',
+ help='Dimensionality augmentation weight ' +
+ '(Kappa). Default 10. -1 for low-dimensional ICA',
+ default=10.)
+ parser.add_argument('--rdaw',
+ dest='rdaw',
+ help='Dimensionality augmentation weight (Rho). ' +
+ 'Default 1. -1 for low-dimensional ICA',
+ default=1.)
+ parser.add_argument('--conv',
+ dest='conv',
+ help='Convergence limit. Default 2.5e-5',
+ default='2.5e-5')
+ parser.add_argument('--sourceTEs',
+ dest='ste',
+ help='Source TEs for models. ex: -ste 0 for all, ' +
+ '-1 for opt. com. Default -1.',
+ default=-1)
+ parser.add_argument('--combmode',
+ dest='combmode',
+ help='Combination scheme for TEs: t2s ' +
+ '(Posse 1999, default),ste(Poser)',
+ default='t2s')
+ parser.add_argument('--denoiseTEs',
+ dest='dne',
+ action='store_true',
+ help='Denoise each TE dataset separately',
+ default=False)
+ parser.add_argument('--initcost',
+ dest='initcost',
+ help='Initial cost func. for ICA: pow3, ' +
+ 'tanh(default), gaus, skew',
+ default='tanh')
+ parser.add_argument('--finalcost',
+ dest='finalcost',
+ help='Final cost func, same opts. as initial',
+ default='tanh')
+ parser.add_argument('--stabilize',
+ dest='stabilize',
+ action='store_true',
+ help='Stabilize convergence by reducing ' +
+ 'dimensionality, for low quality data',
+ default=False)
+ parser.add_argument('--fout',
+ dest='fout',
+ help='Output TE-dependence Kappa/Rho SPMs',
+ action='store_true',
+ default=False)
+ parser.add_argument('--filecsdata',
+ dest='filecsdata',
+ help='Save component selection data',
+ action='store_true',
+ default=False)
+ parser.add_argument('--label',
+ dest='label',
+ help='Label for output directory.',
+ default=None)
+ parser.add_argument('--seed',
+ dest='fixed_seed',
+ help='Seeded value for ICA, for reproducibility.',
+ default=42)
+ return parser
+
+
+def main(argv=None):
+ """Entry point"""
+ options = get_parser().parse_args(argv)
+ tedana.main(options)
+
+
+if __name__ == '__main__':
+ main()
diff --git a/tedana/due.py b/tedana/due.py
new file mode 100644
index 000000000..fe538b88d
--- /dev/null
+++ b/tedana/due.py
@@ -0,0 +1,67 @@
+# emacs: at the end of the file
+# ex: set sts=4 ts=4 sw=4 et:
+# ## ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### #
+"""
+Stub file for a guaranteed safe import of duecredit constructs: if duecredit
+is not available.
+To use it, place it into your project codebase to be imported, e.g. copy as
+ cp stub.py /path/tomodule/module/due.py
+Note that it might be better to avoid naming it duecredit.py to avoid shadowing
+installed duecredit.
+Then use in your code as
+ from .due import due, Doi, BibTeX
+See https://github.com/duecredit/duecredit/blob/master/README.md for examples.
+Origin: Originally a part of the duecredit
+Copyright: 2015-2016 DueCredit developers
+License: BSD-2
+"""
+
+from builtins import str
+from builtins import object
+__version__ = '0.0.5'
+
+
+class InactiveDueCreditCollector(object):
+ """Just a stub at the Collector which would not do anything"""
+ def _donothing(self, *args, **kwargs):
+ """Perform no good and no bad"""
+ pass
+
+ def dcite(self, *args, **kwargs):
+ """If I could cite I would"""
+ def nondecorating_decorator(func):
+ return func
+ return nondecorating_decorator
+
+ cite = load = add = _donothing
+
+ def __repr__(self):
+ return self.__class__.__name__ + '()'
+
+
+def _donothing_func(*args, **kwargs):
+ """Perform no good and no bad"""
+ pass
+
+
+try:
+ from duecredit import due, BibTeX, Doi, Url
+ if 'due' in locals() and not hasattr(due, 'cite'):
+ raise RuntimeError(
+ "Imported due lacks .cite. DueCredit is now disabled")
+except Exception as e:
+ if type(e).__name__ != 'ImportError':
+ import logging
+ logging.getLogger("duecredit").error(
+ "Failed to import duecredit due to %s" % str(e))
+ # Initiate due stub
+ due = InactiveDueCreditCollector()
+ BibTeX = Doi = Url = _donothing_func
+
+# Emacs mode definitions
+# Local Variables:
+# mode: python
+# py-indent-offset: 4
+# tab-width: 4
+# indent-tabs-mode: nil
+# End:
diff --git a/tedana/info.py b/tedana/info.py
new file mode 100644
index 000000000..cd49333c7
--- /dev/null
+++ b/tedana/info.py
@@ -0,0 +1,58 @@
+# -*- coding: utf-8 -*-
+# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
+# vi: set ft=python sts=4 ts=4 sw=4 et:
+"""
+Base module variables
+"""
+
+__version__ = ''
+__author__ = 'tedana developers'
+__copyright__ = 'Copyright 2017, tedana developers'
+__credits__ = ['Elizabeth DuPre', 'Prantik Kundu', 'Ross Markello',
+ 'Taylor Salo', 'Kirstie Whitaker']
+__license__ = 'LGPL 2.0'
+__maintainer__ = 'Elizabeth DuPre'
+__email__ = 'emd222@cornell.edu'
+__status__ = 'Prototype'
+__url__ = 'https://github.com/me-ica/tedana'
+__packagename__ = 'tedana'
+__description__ = ("TE-Dependent Analysis (tedana) of multi-echo functional "
+ "magnetic resonance imaging (fMRI) data.")
+__longdesc__ = ("To do.")
+
+DOWNLOAD_URL = (
+ 'https://github.com/ME-ICA/{name}/archive/{ver}.tar.gz'.format(
+ name=__packagename__, ver=__version__))
+
+REQUIRES = [
+ 'numpy',
+ 'scikit-learn',
+ 'nilearn',
+ 'nibabel>=2.1.0',
+ 'pybids>=0.4.0',
+ 'nipype',
+]
+
+TESTS_REQUIRES = [
+ "codecov",
+ "pytest",
+]
+
+EXTRA_REQUIRES = {
+ 'doc': ['sphinx>=1.5.3', 'sphinx_rtd_theme', 'sphinx-argparse'],
+ 'tests': TESTS_REQUIRES,
+ 'duecredit': ['duecredit'],
+}
+
+# Enable a handle to install all extra dependencies at once
+EXTRA_REQUIRES['all'] = [dep for deps in EXTRA_REQUIRES.values() for dep in deps]
+
+# Package classifiers
+CLASSIFIERS = [
+ 'Development Status :: 2 - Pre-Alpha',
+ 'Intended Audience :: Science/Research',
+ 'Topic :: Scientific/Engineering :: Image Recognition',
+    'License :: OSI Approved :: GNU Lesser General Public License v2 (LGPLv2)',
+ 'Programming Language :: Python :: 2.7',
+ 'Programming Language :: Python :: 3.6',
+]
diff --git a/tedana/interfaces/__init__.py b/tedana/interfaces/__init__.py
new file mode 100644
index 000000000..b1c932756
--- /dev/null
+++ b/tedana/interfaces/__init__.py
@@ -0,0 +1,2 @@
+# emacs: -*- mode: python-mode; py-indent-offset: 4; tab-width: 4; indent-tabs-mode: nil -*-
+# ex: set sts=4 ts=4 sw=4 et:
diff --git a/tedana/interfaces/data.py b/tedana/interfaces/data.py
new file mode 100644
index 000000000..82ba8a4c5
--- /dev/null
+++ b/tedana/interfaces/data.py
@@ -0,0 +1,131 @@
+import os
+import nibabel as nib
+import numpy as np
+from tedana.utils.utils import make_opt_com
+
+
+class MultiEchoData():
+ """
+ Object for holding multi-echo data and related information (e.g., echo
+ times, header, mask)
+
+ Parameters
+ ----------
+ fname : str
+ Filepath to Z-concatenated multi-echo data image. Must be in a format
+ compatible with nibabel (see http://nipy.org/nibabel/ for more info on
+ currently supported filetypes).
+ tes : array_like
+ Echo times for `zcat`
+
+ Attributes
+ ----------
+ img, header : nibabel-object
+ Nibabel object containing loaded input data (i.e., `nib.load(fname)`)
+ tes : np.ndarray
+ Array containing echo times (in milliseconds) for input data
+ zcat : (X x Y x ZE x T) np.ndarray
+ Multi-echo data array, where individual echo time series are
+ concatenated along the third dimension (i.e., ZE)
+ catdata : (X x Y x Z x E x T) np.ndarray
+ Multi-echo data array, where X, Y, Z are spatial dimensions, E
+ corresponds to individual echo data, and T is time
+ mask : (X x Y x Z) np.ndarray, bool
+ Boolean array that encompasses points in `catdata` for which the
+ provided data has good signal across echoes
+ masksum : (X x Y x Z) np.ndarray, int
+ Array containing information about number of echoes that
+ contributed to generation of `mask`
+ """
+
+ def __init__(self, fname, tes):
+ self.fname = os.path.abspath(fname)
+ if not os.path.exists(fname):
+ raise IOError('Cannot find file: {0}. '.format(self.fname) +
+ 'Please ensure file exists on current system.')
+ try:
+ self.img = nib.load(fname)
+ except Exception as e:
+            print('Error loading file: {0}. '.format(self.fname) +
+ 'Please ensure file is in a format compatible with nibabel.')
+ raise e
+
+ self.header = self.img.header
+ self.zcat = self.img.get_fdata()
+ self.tes = np.asarray(tes)
+
+ self._cat2echos()
+ self._gen_ad_mask()
+
+ def _cat2echos(self):
+ """
+ Uncatenates individual echo time series from Z-concatenated array
+ """
+
+        nx, ny = self.zcat.shape[:2]
+        Ne = self.tes.size
+ nz = self.zcat.shape[2] // Ne
+
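+        # The third axis of `zcat` stacks the Ne echoes along Z (all slices of
+        # echo 1, then echo 2, and so on), so a Fortran-order reshape splits
+        # that axis into separate Z and echo dimensions without reordering
+        # voxels.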
+ self.catdata = self.zcat.reshape(nx, ny, nz, Ne, -1, order='F')
+
+ def _gen_ad_mask(self):
+ """
+ Generates mask from `catdata`
+
+ By default, generates a 3D mask (boolean) for voxels with good signal
+        in at least one echo and a 3D array (integer) detailing how many
+ echoes have good signal in that voxel
+
+ Returns
+ -------
+ mask : (X x Y x Z) np.ndarray, bool
+ Boolean mask array
+ masksum : (X x Y x Z) np.ndarray, int
+ Value mask array
+ """
+
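+        # Per-voxel mean over time for each echo. A voxel whose first-echo mean
+        # sits at the 33rd percentile of non-zero voxels serves as a low-signal
+        # reference; one third of that voxel's mean in each echo becomes the
+        # per-echo threshold. `masksum` counts how many echoes exceed their
+        # threshold at each voxel, and `mask` keeps voxels where at least one
+        # echo does.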
+ emeans = self.catdata.mean(axis=-1)
+ first_echo = emeans[:, :, :, 0]
+ perc33 = np.percentile(first_echo[first_echo.nonzero()], 33,
+ interpolation='higher')
+ medv = (first_echo == perc33)
+ lthrs = np.vstack([emeans[:, :, :, echo][medv] / 3 for echo in
+ range(self.tes.size)])
+ lthrs = lthrs[:, lthrs.sum(0).argmax()]
+ mthr = np.ones(self.catdata.shape[:-1])
+ for echo in range(self.tes.size):
+ mthr[:, :, :, echo] *= lthrs[echo]
+
+ self.masksum = (np.abs(emeans) > mthr).astype('int').sum(axis=-1)
+ self.mask = (self.masksum != 0)
+
+
+class T2Star():
+ """
+ Generates T2* map and optimal combination from multi-echo data
+
+ Parameters
+ ----------
+ medata : MultiEchoData
+ Instance of MultiEchoData() class
+
+ Attributes
+ ----------
+ optcom : (X x Y x Z) np.ndarray
+ t2s : (X x Y x Z) np.ndarray
+ s0 : (X x Y x Z) np.ndarray
+ t2ss : (X x Y x Z) np.ndarray
+ s0s : (X x Y x Z) np.ndarray
+ t2sG : (X x Y x Z) np.ndarray
+ s0G : (X x Y x Z) np.ndarray
+ """
+
+ def __init__(self, medata):
+ if not isinstance(medata, MultiEchoData):
+ raise TypeError("Input must be an instance of MultiEchoData.")
+ self.medata = medata
+
+ self._gen_t2sadmap()
+ self.optcom = make_opt_com()
+
+ def _gen_t2sadmap(self):
+ pass
diff --git a/tedana/interfaces/model.py b/tedana/interfaces/model.py
new file mode 100644
index 000000000..d20239129
--- /dev/null
+++ b/tedana/interfaces/model.py
@@ -0,0 +1,21 @@
+from tedana.interfaces.data import MultiEchoData
+
+
+class TEDModelFitter():
+ """
+ Object to fit TE-dependent models to input data and select valid components
+
+ ****************************************************************************
+ This could be built like an sklearn class where you call .fit() and even
+ .select() to fit models and select components
+ ****************************************************************************
+
+ Parameters
+ ----------
+ medata : tedana.interfaces.data.MultiEchoData
+ """
+
+ def __init__(self, medata):
+ if not isinstance(medata, MultiEchoData):
+ raise TypeError("Input must be an instance of MultiEchoData.")
+ self.medata = medata
diff --git a/tedana/interfaces/t2smap.py b/tedana/interfaces/t2smap.py
new file mode 100644
index 000000000..3017fad09
--- /dev/null
+++ b/tedana/interfaces/t2smap.py
@@ -0,0 +1,153 @@
+import numpy as np
+import nibabel as nib
+from tedana.utils.utils import (niwrite, cat2echos,
+ makeadmask, unmask, fmask)
+
+
+def t2sadmap(catd, mask, tes, masksum, start_echo):
+    """
+    Fits per-voxel T2* and S0 estimates across echoes.
+
+    Parameters
+    ----------
+    catd : (nx, ny, nz, Ne, nt) array-like
+    mask : (nx, ny, nz) array-like
+    tes : 1d array-like of echo times
+    masksum : (nx, ny, nz) array-like of good-echo counts per voxel
+    start_echo : int, first echo to include in the fit
+
+    Returns
+    -------
+    t2sa, s0va, t2ss, s0vs, t2saf, s0vaf : array-like
+        Limited, per-echo-subset, and full T2*/S0 maps, respectively.
+    """
+ nx, ny, nz, Ne, nt = catd.shape
+ echodata = fmask(catd, mask)
+ Nm = echodata.shape[0]
+
+ t2ss = np.zeros([nx, ny, nz, Ne - 1])
+ s0vs = t2ss.copy()
+
+ for ne in range(start_echo, Ne + 1):
+
+ # Do Log Linear fit
+ B = np.reshape(np.abs(echodata[:, :ne]) + 1, (Nm, ne * nt)).transpose()
+ B = np.log(B)
+ neg_tes = [-1 * te for te in tes[:ne]]
+ x = np.array([np.ones(ne), neg_tes])
+ X = np.tile(x, (1, nt))
+ X = np.sort(X)[:, ::-1].transpose()
+
+ beta, res, rank, sing = np.linalg.lstsq(X, B)
+ t2s = 1 / beta[1, :].transpose()
+ s0 = np.exp(beta[0, :]).transpose()
+
+ t2s[np.isinf(t2s)] = 500.
+ s0[np.isnan(s0)] = 0.
+
+ t2ss[:, :, :, ne - 2] = np.squeeze(unmask(t2s, mask))
+ s0vs[:, :, :, ne - 2] = np.squeeze(unmask(s0, mask))
+
+ # Limited T2* and S0 maps
+ fl = np.zeros([nx, ny, nz, len(tes) - 2 + 1])
+ for ne in range(Ne - 1):
+ fl_ = np.squeeze(fl[:, :, :, ne])
+ fl_[masksum == ne + 2] = True
+ fl[:, :, :, ne] = fl_
+ fl = np.array(fl, dtype=bool)
+ t2sa = np.squeeze(unmask(t2ss[fl], masksum > 1))
+ s0va = np.squeeze(unmask(s0vs[fl], masksum > 1))
+
+ # Full T2* maps with S0 estimation errors
+ t2saf = t2sa.copy()
+ s0vaf = s0va.copy()
+ t2saf[masksum == 1] = t2ss[masksum == 1, 0]
+ s0vaf[masksum == 1] = s0vs[masksum == 1, 0]
+
+ return t2sa, s0va, t2ss, s0vs, t2saf, s0vaf
+
+
+def optcom(data, t2, tes, mask, combmode, useG=False):
+    """
+    Optimally combines echoes, using weights derived from T2* (or mean signal)
+    estimates.
+
+    Parameters
+    ----------
+    data : (nx, ny, nz, Ne, Nt) array-like
+    t2 : (nx, ny, nz) array-like of T2* estimates
+    tes : 1d array-like of echo times
+    mask : (nx, ny, nz) array-like
+    combmode : {'t2s', 'ste'}, combination scheme
+    useG : bool
+
+    Returns
+    -------
+    out : (nx, ny, nz, Nt) array-like
+        Optimally combined time series.
+    """
+ nx, ny, nz, Ne, Nt = data.shape
+
+ if useG:
+ fdat = fmask(data, mask)
+ ft2s = fmask(t2, mask)
+
+ else:
+ fdat = fmask(data, mask)
+ ft2s = fmask(t2, mask)
+
+ tes = np.array(tes)
+ tes = tes[np.newaxis, :]
+ ft2s = ft2s[:, np.newaxis]
+
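+    # Weights for the weighted average: the 'ste' scheme (Poser) scales each
+    # echo's mean signal by its TE, while the default 't2s' scheme (Posse)
+    # uses TE * exp(-TE / T2*).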
+ if combmode == 'ste':
+ alpha = fdat.mean(-1) * tes
+ else:
+ alpha = tes * np.exp(-tes / ft2s)
+
+ alpha = np.tile(alpha[:, :, np.newaxis], (1, 1, Nt))
+
+ fout = np.average(fdat, axis=1, weights=alpha)
+ out = unmask(fout, mask)
+ print('Out shape is ', out.shape)
+ return out
+
+
+def main(options):
+ """
+ """
+ if options.label is not None:
+ suf = '_%s' % str(options.label)
+ else:
+ suf = ''
+
+ tes = [float(te) for te in options.tes]
+ ne = len(tes)
+ catim = nib.load(options.data[0])
+ head = catim.get_header()
+ head.extensions = []
+ head.set_sform(head.get_sform(), code=1)
+ aff = catim.get_affine()
+ catd = cat2echos(catim.get_data(), ne)
+ nx, ny, nz, Ne, nt = catd.shape
+
+ print("++ Computing Mask")
+ mask, masksum = makeadmask(catd, min=False, getsum=True)
+
+ print("++ Computing Adaptive T2* map")
+ t2s, s0, t2ss, s0vs, t2saf, s0vaf = t2sadmap(catd, mask, tes, masksum, 2)
+ niwrite(masksum, aff, 'masksum%s.nii' % suf)
+ niwrite(t2ss, aff, 't2ss%s.nii' % suf)
+ niwrite(s0vs, aff, 's0vs%s.nii' % suf)
+
+ print("++ Computing optimal combination")
+ tsoc = np.array(optcom(catd,
+ t2s,
+ tes,
+ mask,
+ options.combmode),
+ dtype=float)
+
+ # Clean up numerical errors
+ t2sm = t2s.copy()
+ for n in (tsoc, s0, t2s, t2sm):
+ np.nan_to_num(n, copy=False)
+
+ s0[s0 < 0] = 0
+ t2s[t2s < 0] = 0
+ t2sm[t2sm < 0] = 0
+
+ niwrite(tsoc, aff, 'ocv%s.nii' % suf)
+ niwrite(s0, aff, 's0v%s.nii' % suf)
+ niwrite(t2s, aff, 't2sv%s.nii' % suf)
+ niwrite(t2sm, aff, 't2svm%s.nii' % suf)
diff --git a/tedana/interfaces/tedana.py b/tedana/interfaces/tedana.py
new file mode 100644
index 000000000..746d43beb
--- /dev/null
+++ b/tedana/interfaces/tedana.py
@@ -0,0 +1,1351 @@
+import os
+import sys
+import gzip
+import pickle
+import numpy as np
+import nibabel as nib
+from sklearn import svm
+import scipy.stats as stats
+from tedana.interfaces.t2smap import (optcom, t2sadmap)
+from tedana.utils.utils import (cat2echos, uncat2echos, make_mask,
+ makeadmask, fmask, unmask,
+ fitgaussian, niwrite, dice, andb)
+
+
+"""
+PROCEDURE 2 : Computes ME-PCA and ME-ICA
+-Computes T2* map
+-Computes PCA of concatenated ME data, then computes TE-dependence of PCs
+-Computes ICA of TE-dependence PCs
+-Identifies TE-dependent ICs, outputs high-\kappa (BOLD) component
+ and denoised time series
+-or- Computes TE-dependence of each component of a general linear model
+ specified by input (includes MELODIC FastICA mixing matrix)
+PROCEDURE 2a: Model fitting and component selection routines
+"""
+
+F_MAX = 500
+Z_MAX = 8
+
+
+def do_svm(X_train, y_train, X_test, svmtype=0):
+ """
+ sklearn's Support Vector Classification (SVC).
+ For svmtype=1, implemented in liblinear rather than libsvm.
+
+ Parameters
+ ----------
+ X_train : {array-like, sparse matrix}, shape (n_samples, n_features)
+ Training vectors, where n_samples is the number of samples in the
+ training dataset and n_features is the number of features.
+ y_train : array-like, shape (n_samples,)
+ Target values (class labels in classification, real numbers in
+ regression)
+ X_test : {array-like, sparse matrix}, shape (n_samples, n_features)
+ Test vectors, where n_samples is the number of samples in the test
+ dataset and n_features is the number of features.
+ svmtype : int
+ Desired support vector machine type.
+
+ Returns
+ -------
+ y_pred : array, shape (n_samples,)
+ Predicted class labels for samples in X_test.
+ clf : {:obj:`sklearn.svm.classes.SVC`, :obj:`sklearn.svm.classes.LinearSVC`}
+ Trained sklearn model instance.
+ """
+
+ if svmtype == 0:
+ clf = svm.SVC(kernel='linear')
+ elif svmtype == 1:
+ clf = svm.LinearSVC(loss='squared_hinge', penalty='l1', dual=False)
+ elif svmtype == 2:
+ clf = svm.SVC(kernel='linear', probability=True)
+ else:
+        raise ValueError('Input svmtype must be 0, 1, or 2')
+
+ clf.fit(X_train, y_train)
+ y_pred = clf.predict(X_test)
+
+ return y_pred, clf
+
+
+def spatclust(data, mask, csize, thr, header, aff, infile=None, dindex=0,
+ tindex=0):
+    """
+    Spatially clusters thresholded maps using AFNI's 3dmerge.
+
+    Parameters
+    ----------
+    data : array-like
+        Masked data to threshold and cluster (ignored if `infile` is given).
+    mask : array-like
+        Boolean mask used to unmask `data` and to mask the clustered output.
+    csize : int
+        Minimum cluster size, in voxels.
+    thr : float
+        Threshold applied to `data` before clustering.
+    header : nibabel header
+        Header used when writing the intermediate image.
+    aff : array-like
+        Affine used when writing the intermediate image.
+    infile : str, optional
+        Existing image to cluster instead of writing one from `data`.
+    dindex : int, optional
+        Data sub-brick index passed to 3dmerge.
+    tindex : int, optional
+        Threshold sub-brick index passed to 3dmerge.
+
+    Returns
+    -------
+    clustered : array-like
+        Boolean array of voxels that survive clustering.
+    """
+ if infile is None:
+ data = data.copy()
+ data[data < thr] = 0
+ niwrite(unmask(data, mask), aff, '__clin.nii.gz', header)
+ infile = '__clin.nii.gz'
+ addopts = ''
+ if data is not None and len(np.squeeze(data).shape) > 1 and dindex + tindex == 0:
+ addopts = '-doall'
+ else:
+ addopts = '-1dindex {0} -1tindex {1}'.format(str(dindex), str(tindex))
+
+ cmd_str = '3dmerge -overwrite {0} -dxyz=1 -1clust 1 {1:d} -1thresh {2:.02f} -prefix __clout.nii.gz {3}'
+ os.system(cmd_str.format(addopts, int(csize), float(thr), infile))
+ clustered = fmask(nib.load('__clout.nii.gz').get_data(), mask) != 0
+ return clustered
+
+
+def rankvec(vals):
+ """Returns ranks of array.
+
+ Parameters
+ ----------
+ vals : array-like
+ 1d array from which to determine ranks.
+
+ Returns
+ -------
+ ranks : array-like
+ 1d array of ranks for values in input vals.
+ """
+ try:
+ vals = np.array(vals)
+ except:
+ raise IOError('Input vals is not array_like')
+
+ if len(vals.shape) != 1:
+ raise ValueError('Input vals is not 1d array')
+
+ asort = np.argsort(vals)
+ ranks = np.zeros(vals.shape[0])
+ ranks[asort] = np.arange(vals.shape[0]) + 1
+ return ranks
+
+
+def get_coeffs(data, mask, X, add_const=False):
+ """
+ get_coeffs(data,X)
+
+ Parameters
+ ----------
+ data : array-like
+ Array of shape (nx, ny, nz, nt)
+ mask : array-like
+ Array of shape (nx, ny, nz)
+ X : array-like
+ Array of shape (nt, nc)
+ add_const : bool, optional
+ Default is False.
+
+ Returns
+ -------
+ out : array_like
+ Array of shape (nx, ny, nz, nc)
+ """
+ mdata = fmask(data, mask).transpose()
+
+ # Coerce X to >=2d
+ X = np.atleast_2d(X)
+
+ if X.shape[0] == 1:
+ X = X.T
+ Xones = np.atleast_2d(np.ones(np.min(mdata.shape))).T
+ if add_const:
+ X = np.hstack([X, Xones])
+
+ tmpbetas = np.linalg.lstsq(X, mdata)[0].transpose()
+ if add_const:
+ tmpbetas = tmpbetas[:, :-1]
+ out = unmask(tmpbetas, mask)
+
+ return out
+
+
+def getelbow_cons(ks, val=False):
+ """Elbow using mean/variance method - conservative
+
+ Parameters
+ ----------
+ ks : array-like
+
+ val : bool, optional
+ Default is False
+
+ Returns
+ -------
+ array-like
+ Either the elbow index (if val is True) or the values at the elbow
+ index (if val is False)
+ """
+ ks = np.sort(ks)[::-1]
+ nk = len(ks)
+ temp1 = [(ks[nk-5-ii-1] > ks[nk-5-ii:nk].mean() + 2*ks[nk-5-ii:nk].std()) for ii in range(nk-5)]
+ ds = np.array(temp1[::-1], dtype=np.int)
+ dsum = []
+ c_ = 0
+ for d_ in ds:
+ c_ = (c_ + d_) * d_
+ dsum.append(c_)
+ e2 = np.argmax(np.array(dsum))
+ elind = np.max([getelbow_mod(ks), e2])
+
+ if val: return ks[elind]
+ else: return elind
+
+
+def getelbow_mod(ks, val=False):
+ """Elbow using linear projection method - moderate
+
+ Parameters
+ ----------
+ ks : array-like
+
+ val : bool, optional
+ Default is False
+
+ Returns
+ -------
+ array-like
+ Either the elbow index (if val is True) or the values at the elbow
+ index (if val is False)
+ """
+ ks = np.sort(ks)[::-1]
+ nc = ks.shape[0]
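+    # The elbow is the point farthest (in perpendicular distance) from the
+    # straight line joining the first and last points of the sorted curve.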
+ coords = np.array([np.arange(nc), ks])
+ p = coords - np.tile(np.reshape(coords[:, 0], (2, 1)), (1, nc))
+ b = p[:, -1]
+ b_hat = np.reshape(b / np.sqrt((b ** 2).sum()), (2, 1))
+ proj_p_b = p - np.dot(b_hat.T, p) * np.tile(b_hat, (1, nc))
+ d = np.sqrt((proj_p_b ** 2).sum(axis=0))
+ k_min_ind = d.argmax()
+ k_min = ks[k_min_ind]
+
+ if val: return ks[k_min_ind]
+ else: return k_min_ind
+
+
+def getelbow_aggr(ks, val=False):
+ """Elbow using curvature - aggressive
+
+ Parameters
+ ----------
+ ks : array-like
+
+ val : bool, optional
+ Default is False
+
+ Returns
+ -------
+ array-like
+ Either the elbow index (if val is True) or the values at the elbow
+ index (if val is False)
+ """
+ ks = np.sort(ks)[::-1]
+ dKdt = ks[:-1] - ks[1:]
+ dKdt2 = dKdt[:-1] - dKdt[1:]
+ curv = np.abs((dKdt2 / (1 + dKdt[:-1]**2.) ** (3. / 2.)))
+ curv[np.isnan(curv)] = -1 * 10**6
+ maxcurv = np.argmax(curv) + 2
+
+    if val: return ks[maxcurv]
+    else: return maxcurv
+
+
+def getfbounds(ne):
+ """
+
+ Parameters
+ ----------
+ ne : int
+ Number of echoes.
+
+ Returns
+    -------
+    F05, F025, F01 : float
+        F-statistic significance thresholds (p < 0.05, 0.025, and 0.01,
+        respectively) for the given number of echoes.
+    """
+ if not isinstance(ne, int):
+ raise IOError('Input ne must be int')
+ elif ne <= 0:
+ raise ValueError('Input ne must be greater than 0')
+
+ F05s = [None, 18.5, 10.1, 7.7, 6.6, 6.0, 5.6, 5.3, 5.1, 5.0]
+ F025s = [None, 38.5, 17.4, 12.2, 10, 8.8, 8.1, 7.6, 7.2, 6.9]
+ F01s = [None, 98.5, 34.1, 21.2, 16.2, 13.8, 12.2, 11.3, 10.7, 10.]
+ return F05s[ne], F025s[ne], F01s[ne]
+
+
+def eimask(dd, ees=None):
+    if ees is None:
+        ees = range(dd.shape[1])
+    imask = np.zeros([dd.shape[0], len(ees)])
+    for ee in ees:
+        print(ee)
+        lthr = 0.001 * stats.scoreatpercentile(dd[:, ee, :].flatten(), 98,
+                                               interpolation_method='lower')
+        hthr = 5 * stats.scoreatpercentile(dd[:, ee, :].flatten(), 98,
+                                           interpolation_method='lower')
+        print(lthr, hthr)
+        imask[dd[:, ee, :].mean(1) > lthr, ee] = 1
+        imask[dd[:, ee, :].mean(1) > hthr, ee] = 0
+    return imask
+
+
+def split_ts(data,comptable,mmix, acc, rej, midk):
+ cbetas = get_coeffs(data-data.mean(-1)[:,:,:,np.newaxis],mask,mmix)
+ betas = fmask(cbetas,mask)
+ if len(acc)!=0:
+ hikts=unmask(betas[:,acc].dot(mmix.T[acc,:]),mask)
+ else:
+ hikts = None
+ return hikts,data-hikts
+
+
+def computefeats2(data,mmix,mask,normalize=True):
+ #Write feature versions of components
+ data = data[mask]
+ data_vn = (data-data.mean(axis=-1)[:,np.newaxis])/data.std(axis=-1)[:,np.newaxis]
+ data_R = get_coeffs(unmask(data_vn,mask),mask,mmix)[mask]
+ data_R[data_R<-.999] = -0.999
+ data_R[data_R>.999] = .999
+ data_Z = np.arctanh(data_R)
+ if len(data_Z.shape)==1: data_Z = np.atleast_2d(data_Z).T
+ if normalize:
+ #data_Z2 = ((data_Z.T-data_Z.mean(0)[:,np.newaxis])/data_Z.std(0)[:,np.newaxis]).T
+ data_Z = (((data_Z.T-data_Z.mean(0)[:,np.newaxis])/data_Z.std(0)[:,np.newaxis]) + (data_Z.mean(0)/data_Z.std(0))[:,np.newaxis]).T
+ return data_Z
+
+
+def ctabsel(ctabfile):
+ ctlines = open(ctabfile).readlines()
+ class_tags = ['#ACC','#REJ','#MID','#IGN']
+ class_dict = {}
+ for ii,ll in enumerate(ctlines):
+ for kk in class_tags:
+ if ll[:4]==kk and ll[4:].strip() != '':
+ class_dict[kk] = ll[4:].split('#')[0].split(',')
+ return tuple([np.array(class_dict[kk],dtype=int) for kk in class_tags])
+
+
+# def dwtmat(mmix):
+# print("++Wavelet transforming data")
+# llt = len(np.hstack(pywt.dwt(mmix[0],'db2')))
+# mmix_wt = np.zeros([mmix.shape[0],llt])
+# for ii in range(mmix_wt.shape[0]):
+# wtx = pywt.dwt(mmix[ii],'db2')
+# #print len(wtx[0]),len(wtx[1])
+# cAlen = len(wtx[0])
+# mmix_wt[ii] = np.hstack(wtx)
+# return mmix_wt,cAlen
+#
+#
+# def idwtmat(mmix_wt,cAl):
+# print("++Inverse wavelet transforming")
+# lt = len(pywt.idwt(mmix_wt[0,:cAl],mmix_wt[0,cAl:],'db2',correct_size=True))
+# mmix_iwt = np.zeros([mmix_wt.shape[0],lt])
+# for ii in range(mmix_iwt.shape[0]):
+# mmix_iwt[ii] = pywt.idwt(mmix_wt[ii,:cAl],mmix_wt[ii,cAl:],'db2',correct_size=True)
+# return mmix_iwt
+
+
+def fitmodels_direct(catd, mmix, mask,
+ t2s, t2sG, tes,
+ combmode, head, fout=None,
+ reindex=False,mmixN=None,
+ full_sel=True):
+    """
+    Fits TE-dependence (R2) and TE-independence (S0) models to each component.
+
+    Parameters
+    ----------
+    fout : flag for per-component TE-dependence map output
+    t2s : (nx, ny, nz) ndarray
+    tes : 1d array of echo times
+    """
+
+ #Compute opt. com. raw data
+ tsoc = np.array(optcom(catd, t2sG, tes,
+ mask, combmode, useG=True),
+ dtype=float)[mask]
+ tsoc_mean = tsoc.mean(axis=-1)
+ tsoc_dm = tsoc-tsoc_mean[:,np.newaxis]
+
+ #Compute un-normalized weight dataset (features)
+ if mmixN is None:
+ mmixN = mmix
+ #WTS = computefeats2(unmask(unmask(tsoc,mask)[t2s!=0],t2s!=0),mmixN,t2s!=0,normalize=False)
+ WTS = computefeats2(unmask(tsoc,mask),mmixN,mask,normalize=False)
+
+ #Compute PSC dataset - shouldn't have to refit data
+ global tsoc_B
+ tsoc_B = get_coeffs(unmask(tsoc_dm,mask),mask,mmix)[mask]
+ tsoc_Babs = np.abs(tsoc_B)
+ PSC = tsoc_B/tsoc.mean(axis=-1)[:,np.newaxis]*100
+
+ #Compute skews to determine signs based on unnormalized weights, correct mmix & WTS signs based on spatial distribution tails
+ from scipy.stats import skew
+ signs = skew(WTS,axis=0)
+ signs /= np.abs(signs)
+ mmix = mmix.copy()
+ mmix*=signs
+ WTS*=signs
+ PSC*=signs
+ totvar = (tsoc_B**2).sum()
+ totvar_norm = (WTS**2).sum()
+
+ #Compute Betas and means over TEs for TE-dependence analysis
+ Ne = len(tes)
+ betas = cat2echos(get_coeffs(uncat2echos(catd,Ne),np.tile(mask,(1,1,Ne)),mmix),Ne)
+ nx,ny,nz,Ne,nc = betas.shape
+ Nm = mask.sum()
+ NmD = (t2s!=0).sum()
+ mu = catd.mean(axis=-1)
+ tes = np.reshape(tes,(Ne,1))
+ fmin,fmid,fmax = getfbounds(Ne)
+
+ #Mask arrays
+ mumask = fmask(mu,t2s!=0)
+ #t2smask = fmask(t2s,mask)
+ t2smask = fmask(t2s,t2s!=0)
+ betamask = fmask(betas,t2s!=0)
+
+ #Setup Xmats
+ #Model 1
+ X1 = mumask.transpose()
+
+ #Model 2
+ X2 = np.tile(tes,(1,NmD))*mumask.transpose()/t2smask.transpose()
+
+ #Tables for component selection
+ global Kappas, Rhos, varex, varex_norm
+ global Z_maps, F_R2_maps, F_S0_maps
+ global Z_clmaps, F_R2_clmaps, F_S0_clmaps
+ global Br_clmaps_R2, Br_clmaps_S0
+ Kappas = np.zeros([nc])
+ Rhos = np.zeros([nc])
+ varex = np.zeros([nc])
+ varex_norm = np.zeros([nc])
+ Z_maps = np.zeros([Nm,nc])
+ F_R2_maps = np.zeros([NmD,nc])
+ F_S0_maps = np.zeros([NmD,nc])
+ Z_clmaps = np.zeros([Nm,nc])
+ F_R2_clmaps = np.zeros([NmD,nc])
+ F_S0_clmaps = np.zeros([NmD,nc])
+ Br_clmaps_R2 = np.zeros([Nm,nc])
+ Br_clmaps_S0 = np.zeros([Nm,nc])
+
+ for i in range(nc):
+
+ #size of B is (nc, nx*ny*nz)
+ B = np.atleast_3d(betamask)[:,:,i].transpose()
+ alpha = (np.abs(B)**2).sum(axis=0)
+ varex[i] = (tsoc_B[:,i]**2).sum()/totvar*100.
+ varex_norm[i] = (unmask(WTS,mask)[t2s!=0][:,i]**2).sum()/totvar_norm*100.
+
+ #S0 Model
+ coeffs_S0 = (B*X1).sum(axis=0)/(X1**2).sum(axis=0)
+ SSE_S0 = (B - X1*np.tile(coeffs_S0,(Ne,1)))**2
+ SSE_S0 = SSE_S0.sum(axis=0)
+ F_S0 = (alpha - SSE_S0)*2/(SSE_S0)
+ F_S0_maps[:,i] = F_S0
+
+ #R2 Model
+ coeffs_R2 = (B*X2).sum(axis=0)/(X2**2).sum(axis=0)
+ SSE_R2 = (B - X2*np.tile(coeffs_R2,(Ne,1)))**2
+ SSE_R2 = SSE_R2.sum(axis=0)
+ F_R2 = (alpha - SSE_R2)*2/(SSE_R2)
+ F_R2_maps[:,i] = F_R2
+
+ #Compute weights as Z-values
+ wtsZ=(WTS[:,i]-WTS[:,i].mean())/WTS[:,i].std()
+ wtsZ[np.abs(wtsZ)>Z_MAX]=(Z_MAX*(np.abs(wtsZ)/wtsZ))[np.abs(wtsZ)>Z_MAX]
+ Z_maps[:,i] = wtsZ
+
+ #Compute Kappa and Rho
+ F_S0[F_S0>F_MAX] = F_MAX
+ F_R2[F_R2>F_MAX] = F_MAX
+ Kappas[i] = np.average(F_R2,weights=np.abs(np.squeeze(unmask(wtsZ,mask)[t2s!=0]**2.)))
+ Rhos[i] = np.average(F_S0,weights=np.abs(np.squeeze(unmask(wtsZ,mask)[t2s!=0]**2.)))
+
+ #Tabulate component values
+ comptab_pre = np.vstack([np.arange(nc),Kappas,Rhos,varex,varex_norm]).T
+ if reindex:
+ #Re-index all components in Kappa order
+ comptab = comptab_pre[comptab_pre[:,1].argsort()[::-1],:]
+ Kappas = comptab[:,1]; Rhos = comptab[:,2]; varex = comptab[:,3]; varex_norm = comptab[:,4]
+ nnc = np.array(comptab[:,0],dtype=np.int)
+ mmix_new = mmix[:,nnc]
+ F_S0_maps = F_S0_maps[:,nnc]; F_R2_maps = F_R2_maps[:,nnc]; Z_maps = Z_maps[:,nnc]
+ WTS = WTS[:,nnc]; PSC=PSC[:,nnc]; tsoc_B=tsoc_B[:,nnc]; tsoc_Babs=tsoc_Babs[:,nnc]
+ comptab[:,0] = np.arange(comptab.shape[0])
+ else:
+ comptab = comptab_pre
+ mmix_new = mmix
+
+ #Full selection including clustering criteria
+ seldict=None
+ if full_sel:
+ for i in range(nc):
+
+ #Save out files
+ out = np.zeros((nx,ny,nz,4))
+ if fout!=None:
+ ccname = "cc%.3d.nii" % i
+ else: ccname = ".cc_temp.nii.gz"
+
+ out[:,:,:,0] = np.squeeze(unmask(PSC[:,i],mask))
+ out[:,:,:,1] = np.squeeze(unmask(F_R2_maps[:,i],t2s!=0))
+ out[:,:,:,2] = np.squeeze(unmask(F_S0_maps[:,i],t2s!=0))
+ out[:,:,:,3] = np.squeeze(unmask(Z_maps[:,i],mask))
+
+ niwrite(out,fout,ccname,head)
+ os.system('3drefit -sublabel 0 PSC -sublabel 1 F_R2 -sublabel 2 F_S0 -sublabel 3 Z_sn %s 2> /dev/null > /dev/null'%ccname)
+
+ csize = np.max([int(Nm*0.0005)+5,20])
+ #csize = 10
+
+ #Do simple clustering on F
+ os.system("3dcalc -overwrite -a %s[1..2] -expr 'a*step(a-%i)' -prefix .fcl_in.nii.gz -overwrite" % (ccname,fmin))
+ os.system('3dmerge -overwrite -dxyz=1 -1clust 1 %i -doall -prefix .fcl_out.nii.gz .fcl_in.nii.gz' % (csize))
+ sel = fmask(nib.load('.fcl_out.nii.gz').get_data(),t2s!=0)!=0
+ sel = np.array(sel,dtype=np.int)
+ F_R2_clmaps[:,i] = sel[:,0]
+ F_S0_clmaps[:,i] = sel[:,1]
+
+ #Do simple clustering on Z at p<0.05
+ sel = spatclust(None,mask,csize,1.95,head,aff,infile=ccname,dindex=3,tindex=3)
+ Z_clmaps[:,i] = sel
+
+ #Do simple clustering on ranked signal-change map
+ countsigFR2 = F_R2_clmaps[:,i].sum()
+ countsigFS0 = F_S0_clmaps[:,i].sum()
+ Br_clmaps_R2[:,i] = spatclust(rankvec(tsoc_Babs[:,i]),mask,csize,max(tsoc_Babs.shape)-countsigFR2,head,aff)
+ Br_clmaps_S0[:,i] = spatclust(rankvec(tsoc_Babs[:,i]),mask,csize,max(tsoc_Babs.shape)-countsigFS0,head,aff)
+
+ seldict = {}
+ selvars = ['Kappas','Rhos','WTS','varex','Z_maps','F_R2_maps','F_S0_maps',\
+ 'Z_clmaps','F_R2_clmaps','F_S0_clmaps','tsoc_B','Br_clmaps_R2','Br_clmaps_S0','PSC']
+ for vv in selvars:
+ seldict[vv] = eval(vv)
+
+ return seldict,comptab,betas,mmix_new
+
+
+def selcomps(seldict, mmix, head, debug=False, olevel=2, oversion=99, knobargs='',
+ filecsdata=False, savecsdiag=True, group0_only=False,
+ strict_mode=False):
+
+ import numpy.fft as fft
+ from sklearn.cluster import DBSCAN
+
+ """
+ Set knobs
+ """
+ if knobargs != '':
+ knobs = vars(knobargs)
+ locals().update(knobs)
+
+ try:
+ if filecsdata: filecsdata=True
+ except:
+ pass
+
+ if filecsdata:
+ import bz2
+ if seldict is not None:
+ print("Saving component selection data")
+ csstate_f = bz2.BZ2File('compseldata.pklbz','wb')
+ pickle.dump(seldict,csstate_f)
+ csstate_f.close()
+ else:
+ try:
+ csstate_f = bz2.BZ2File('compseldata.pklbz','rb')
+ seldict = pickle.load(csstate_f)
+ csstate_f.close()
+ except:
+ print("No component data found!")
+ return None
+
+ #Dump dictionary into variable names
+ for key in seldict.keys(): exec("%s=seldict['%s']" % (key,key))
+
+ #List of components
+ midk = []
+ ign = []
+ nc = np.arange(len(Kappas))
+ ncl = np.arange(len(Kappas))
+
+ #If user has specified components to accept manually
+ try:
+ if manacc:
+ acc = sorted([int(vv) for vv in manacc.split(',')])
+ midk = []
+ rej = sorted(np.setdiff1d(ncl,acc))
+ return acc,rej,midk,[] #Add string for ign
+ except:
+ pass
+
+ """
+ Do some tallies for no. of significant voxels
+ """
+ countsigZ = Z_clmaps.sum(0)
+ countsigFS0 = F_S0_clmaps.sum(0)
+ countsigFR2 = F_R2_clmaps.sum(0)
+ countnoise = np.zeros(len(nc))
+
+ """
+ Make table of dice values
+ """
+ dice_table = np.zeros([nc.shape[0],2])
+ csize = np.max([int(mask.sum()*0.0005)+5,20])
+ for ii in ncl:
+ dice_FR2 = dice(unmask(Br_clmaps_R2[:,ii],mask)[t2s!=0],F_R2_clmaps[:,ii])
+ dice_FS0 = dice(unmask(Br_clmaps_S0[:,ii],mask)[t2s!=0],F_S0_clmaps[:,ii])
+ dice_table[ii,:] = [dice_FR2,dice_FS0] #step 3a here and above
+ dice_table[np.isnan(dice_table)]=0
+
+ """
+ Make table of noise gain
+ """
+ tt_table = np.zeros([len(nc),4])
+ counts_FR2_Z = np.zeros([len(nc),2])
+ for ii in nc:
+ comp_noise_sel = andb([np.abs(Z_maps[:,ii])>1.95,Z_clmaps[:,ii]==0])==2
+ countnoise[ii] = np.array(comp_noise_sel,dtype=np.int).sum()
+ noise_FR2_Z = np.log10(np.unique(F_R2_maps[unmask(comp_noise_sel,mask)[t2s!=0],ii]))
+ signal_FR2_Z = np.log10(np.unique(F_R2_maps[unmask(Z_clmaps[:,ii],mask)[t2s!=0]==1,ii]))
+ counts_FR2_Z[ii,:] = [len(signal_FR2_Z),len(noise_FR2_Z)]
+ try:
+ ttest = stats.ttest_ind(signal_FR2_Z,noise_FR2_Z,equal_var=True)
+ mwu = stats.norm.ppf(stats.mannwhitneyu(signal_FR2_Z,noise_FR2_Z)[1])
+ tt_table[ii,0] = np.abs(mwu)*ttest[0]/np.abs(ttest[0])
+ tt_table[ii,1] = ttest[1]
+ except: pass
+ tt_table[np.isnan(tt_table)]=0
+ tt_table[np.isinf(tt_table[:,0]),0]=np.percentile(tt_table[~np.isinf(tt_table[:,0]),0],98)
+
+ #Time series derivative kurtosis
+ mmix_dt = (mmix[:-1]-mmix[1:])
+ mmix_kurt = stats.kurtosis(mmix_dt)
+ mmix_std = np.std(mmix_dt,0)
+
+ """
+ Step 1: Reject anything that's obviously an artifact
+ a. Estimate a null variance
+ """
+ rej = ncl[andb([Rhos>Kappas,countsigFS0>countsigFR2])>0]
+ ncl = np.setdiff1d(ncl,rej)
+
+ """
+ Step 2: Compute 3-D spatial FFT of Beta maps to detect high-spatial frequency artifacts
+ """
+ fproj_arr = np.zeros([np.prod(mask.shape[0:2]),len(nc)])
+ fproj_arr_val = np.zeros([np.prod(mask.shape[0:2]),len(nc)])
+ spr = []
+ fdist = []
+ for ii in nc:
+ fproj = np.fft.fftshift(np.abs(np.fft.rfftn(unmask(seldict['PSC'],mask)[:,:,:,ii])))
+ fproj_z = fproj.max(2)
+ fproj[fproj==fproj.max()] = 0
+ fproj_arr[:,ii] = rankvec(fproj_z.flatten())
+ fproj_arr_val[:,ii] = fproj_z.flatten()
+ spr.append(np.array(fproj_z>fproj_z.max()/4,dtype=np.int).sum())
+ fprojr = np.array([fproj,fproj[:,:,::-1]]).max(0)
+ fdist.append(np.max([ fitgaussian(fproj.max(jj))[3:].max() for jj in range(len(fprojr.shape)) ]))
+ fdist = np.array(fdist)
+ spr = np.array(spr)
+
+ """
+ Step 3: Create feature space of component properties
+ """
+ fdist_pre = fdist.copy()
+ fdist_pre[fdist>np.median(fdist)*3] = np.median(fdist)*3
+ fdist_z = (fdist_pre - np.median(fdist_pre) ) / fdist_pre.std()
+ spz = (spr-spr.mean())/spr.std()
+ Tz = (tt_table[:,0]-tt_table[:,0].mean())/tt_table[:,0].std()
+ varex_ = np.log(varex)
+ Vz = (varex_-varex_.mean())/varex_.std()
+ Kz = (Kappas-Kappas.mean())/Kappas.std()
+ Rz = (Rhos-Rhos.mean())/Rhos.std()
+ Ktz = np.log(Kappas)/2
+ Ktz = (Ktz-Ktz.mean())/Ktz.std()
+ Rtz = np.log(Rhos)/2
+ Rtz = (Rtz-Rtz.mean())/Rtz.std()
+ KRr = stats.zscore(np.log(Kappas)/np.log(Rhos))
+ cnz = (countnoise-countnoise.mean())/countnoise.std()
+ Dz = stats.zscore(np.arctanh(dice_table[:,0]+0.001))
+ fz = np.array([Tz,Vz,Ktz,KRr,cnz,Rz,mmix_kurt,fdist_z])
+
+ """
+ Step 3: Make initial guess of where BOLD components are and use DBSCAN to exclude noise components and find a sample set of 'good' components
+ """
+ #epsmap is [index,level of overlap with dicemask,number of high Rho components]
+ F05,F025,F01 = getfbounds(ne)
+ epsmap = []
+ Rhos_sorted = np.array(sorted(Rhos))[::-1]
+ #Make an initial guess as to number of good components based on consensus of control points across Rhos and Kappas
+ KRcutguesses = [getelbow_mod(Rhos),getelbow_cons(Rhos),getelbow_aggr(Rhos),getelbow_mod(Kappas),getelbow_cons(Kappas),getelbow_aggr(Kappas)]
+ Kelbowval = np.median([getelbow_mod(Kappas,val=True),getelbow_cons(Kappas,val=True),getelbow_aggr(Kappas,val=True)]+list(getfbounds(ne)))
+ Khighelbowval = stats.scoreatpercentile([getelbow_mod(Kappas,val=True),getelbow_cons(Kappas,val=True),getelbow_aggr(Kappas,val=True)]+list(getfbounds(ne)),75, interpolation_method='lower')
+ KRcut = np.median(KRcutguesses)
+ #only use exclusive when inclusive is extremely inclusive - double KRcut
+ if getelbow_cons(Kappas) > KRcut*2 and getelbow_mod(Kappas,val=True) < F01: Kcut = getelbow_mod(Kappas,val=True)
+ else: Kcut = getelbow_cons(Kappas,val=True)
+ #only use inclusive when exclusive is extremely exclusive - half KRcut (remember for Rho inclusive is higher, so want both Kappa and Rho to default to lower)
+ if getelbow_cons(Rhos) > KRcut*2 : Rcut = getelbow_mod(Rhos,val=True) #consider something like min([getelbow_mod(Rhos,True),sorted(Rhos)[::-1][KRguess] ])
+ else: Rcut = getelbow_cons(Rhos,val=True)
+ if Rcut > Kcut: Kcut = Rcut #Rcut should never be higher than Kcut
+ KRelbow = andb([Kappas>Kcut,Rhos<Rcut])
+ #Make guess of Kundu et al 2011 plus remove high frequencies, generally high variance, and high variance given low Kappa
+ tt_lim = stats.scoreatpercentile(tt_table[tt_table[:,0]>0,0],75, interpolation_method='lower')/3
+ KRguess = np.setdiff1d(np.setdiff1d(nc[KRelbow==2],rej),np.union1d(nc[tt_table[:,0]<tt_lim],np.union1d(np.union1d(nc[spz>1],nc[Vz>2]),nc[andb([varex>0.5*sorted(varex)[::-1][int(KRcut)],Kappas<2*Kcut])==2])))
+ guessmask = np.zeros(len(nc))
+ guessmask[KRguess] = 1
+
+ #Throw lower-risk bad components out
+ rejB = ncl[andb([tt_table[ncl,0]<0,varex[ncl]>np.median(varex),ncl > KRcut])==3]
+ rej = np.union1d(rej,rejB)
+ ncl = np.setdiff1d(ncl,rej)
+
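+ #Sweep the DBSCAN eps parameter upward in steps of 0.005; a candidate clustering is kept
+ #only if it yields several clusters (max label > 1) but not too many (< nc/6), no rejected
+ #component falls in cluster 0, and fewer than half of the components are labeled as noise (-1).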
+ for ii in range(20000):
+ db = DBSCAN(eps=.005+ii*.005, min_samples=3).fit(fz.T)
+ if db.labels_.max() > 1 and db.labels_.max() < len(nc)/6 and np.intersect1d(rej,nc[db.labels_==0]).shape[0]==0 and np.array(db.labels_==-1,dtype=int).sum()/float(len(nc))<.5:
+ epsmap.append([ii, dice(guessmask,db.labels_==0),np.intersect1d(nc[db.labels_==0],nc[Rhos>getelbow_mod(Rhos_sorted,val=True)]).shape[0] ])
+ if debug: print("found solution", ii, db.labels_)
+ db = None
+
+ epsmap = np.array(epsmap)
+ group0 = []
+ dbscanfailed=False
+ if len(epsmap)!=0 :
+ #Select index that maximizes Dice with guessmask but first minimizes number of higher Rho components
+ ii = int(epsmap[np.argmax(epsmap[epsmap[:,2]==np.min(epsmap[:,2]),1],0),0])
+ print('Component selection tuning: ' , epsmap[:,1].max())
+ db = DBSCAN(eps=.005+ii*.005, min_samples=3).fit(fz.T)
+ ncl = nc[db.labels_==0]
+ ncl = np.setdiff1d(ncl,rej)
+ ncl = np.setdiff1d(ncl,ncl[ncl>len(nc)-len(rej)])
+ group0 = ncl.copy()
+ group_n1 = nc[db.labels_==-1]
+ to_clf = np.setdiff1d(nc,np.union1d(ncl,rej))
+ if len(group0)==0 or len(group0) < len(KRguess)*.5:
+ dbscanfailed=True
+ print("DBSCAN based guess failed. Using elbow guess method.")
+ ncl = np.setdiff1d(np.setdiff1d(nc[KRelbow==2],rej),np.union1d(nc[tt_table[:,0]<tt_lim],np.union1d(np.union1d(nc[spz>1],nc[Vz>2]),nc[andb([varex>0.5*sorted(varex)[::-1][int(KRcut)],Kappas<2*Kcut])==2])))
+ group0 = ncl.copy()
+ group_n1 = []
+ to_clf = np.setdiff1d(nc,np.union1d(group0,rej))
+ if len(group0)<2 or (len(group0)<4 and float(len(rej))/len(group0)>3):
+ print("WARNING: Extremely limited reliable BOLD signal space. Not filtering further into midk etc.")
+ midkfailed = True
+ min_acc = np.array([])
+ if len(group0)!=0:
+ toacc_hi = np.setdiff1d(nc [andb([ fdist <= np.max(fdist[group0]), Rhos<F025, Vz>-2 ])==3 ],np.union1d(group0,rej)) #For extremes, building in a 20% tolerance
+ min_acc = np.union1d(group0,toacc_hi)
+ to_clf = np.setdiff1d(nc , np.union1d(min_acc,rej) )
+ diagstepkeys=['rej','KRcut','Kcut','Rcut','dbscanfailed','midkfailed','KRguess','group0','min_acc','toacc_hi']
+ diagstepout=[]
+ for ddk in diagstepkeys: diagstepout.append("%s: %s" % (ddk,eval('str(%s)' % ddk) ) )
+ with open('csstepdata.txt','w') as ofh:
+ ofh.write('\n'.join(diagstepout))
+ ofh.close()
+ return list(sorted(min_acc)),list(sorted(rej)),[],list(sorted(to_clf))
+
+ if group0_only: return list(sorted(group0)),list(sorted(rej)),[],list(sorted(to_clf))
+
+ #Find additional components to reject based on Dice - doing this here since Dice is a little unstable, need to reference group0
+ rej_supp = []
+ dice_rej = False
+ if not dbscanfailed and len(rej)+len(group0)<0.75*len(nc):
+ dice_rej = True
+ rej_supp = np.setdiff1d(np.setdiff1d(np.union1d(rej,nc[dice_table[nc,0]<=dice_table[nc,1]] ),group0),group_n1)
+ rej = np.union1d(rej,rej_supp)
+
+ #Temporal features
+ mmix_kurt_z = (mmix_kurt-mmix_kurt[group0].mean())/mmix_kurt[group0].std() #larger is worse - spike
+ mmix_std_z = -1*((mmix_std-mmix_std[group0].mean())/mmix_std[group0].std()) #smaller is worse - drift
+ mmix_kurt_z_max = np.max([mmix_kurt_z,mmix_std_z],0)
+
+ """
+ Step 2: Classify midk and ignore using separate SVMs for different variance regimes
+ #To render hyperplane:
+ min_x = np.min(spz2);max_x=np.max(spz2)
+ # plotting separating hyperplane
+ ww = clf_.coef_[0]
+ aa = -ww[0] / ww[1]
+ xx = np.linspace(min_x - 2, max_x + 2) # make sure the line is long enough
+ yy = aa * xx - (clf_.intercept_[0]) / ww[1]
+ plt.plot(xx, yy, '-')
+ """
+
+ toacc_hi = np.setdiff1d(nc [andb([ fdist <= np.max(fdist[group0]), Rhos<F025, Vz>-2 ])==3 ],np.union1d(group0,rej)) #Tried getting rid of accepting based on SVM altogether, now using only rejecting
+ toacc_lo = np.intersect1d(to_clf,nc[andb([spz<1,Rz<0,mmix_kurt_z_max<5,Dz>-1,Tz>-1,Vz<0,Kappas>=F025,fdist<3*np.percentile(fdist[group0],98)])==8])
+ midk_clf,clf_ = do_svm(fproj_arr_val[:, np.union1d(group0, rej)].T,
+ [0] * len(group0) + [1] * len(rej),
+ fproj_arr_val[:, to_clf].T,
+ svmtype=2)
+ midk = np.setdiff1d(to_clf[andb([midk_clf==1,varex[to_clf]>np.median(varex[group0]) ])==2],np.union1d(toacc_hi,toacc_lo))
+ if len(np.intersect1d(to_clf[andb([midk_clf==1,Vz[to_clf]>0 ])==2],toacc_hi))==0:
+ svm_acc_fail = True
+ toacc_hi = np.union1d(toacc_hi,to_clf[midk_clf==0]) #only use SVM to augment toacc_hi only if toacc_hi isn't already conflicting with SVM choice
+ else: svm_acc_fail = False
+
+ """
+ Step 3: Compute variance associated with low T2* areas (e.g. draining veins)
+ #To write out veinmask
+ veinout = np.zeros(t2s.shape)
+ veinout[t2s!=0] = veinmaskf
+ niwrite(veinout,aff,'veinmaskf.nii',head)
+ veinBout = unmask(veinmaskB,mask)
+ niwrite(veinBout,aff,'veins50.nii',head)
+ """
+
+ tsoc_B_Zcl = np.zeros(tsoc_B.shape)
+ tsoc_B_Zcl[Z_clmaps!=0] = np.abs(tsoc_B)[Z_clmaps!=0]
+ sig_B = [ stats.scoreatpercentile(tsoc_B_Zcl[tsoc_B_Zcl[:,ii]!=0,ii],25) if len(tsoc_B_Zcl[tsoc_B_Zcl[:,ii]!=0,ii]) != 0 else 0 for ii in nc ]
+ sig_B = np.abs(tsoc_B)>np.tile(sig_B,[tsoc_B.shape[0],1])
+
+ veinmask = andb([t2s < stats.scoreatpercentile(t2s[t2s!=0],15, interpolation_method='lower'),t2s!=0 ])==2
+ veinmaskf = veinmask[mask]
+ veinR = np.array(sig_B[veinmaskf].sum(0),dtype=float)/sig_B[veinmaskf].shape[0]
+ veinR[np.isnan(veinR)] = 0
+ veinc = np.union1d(rej,midk)
+ rej_veinRZ = ((veinR-veinR[veinc].mean())/veinR[veinc].std())[veinc]
+ rej_veinRZ[rej_veinRZ<0] = 0
+ rej_veinRZ[countsigFR2[veinc] > np.array(veinmaskf,dtype=int).sum()] = 0
+ t2s_lim = [stats.scoreatpercentile(t2s[t2s!=0],50, interpolation_method='lower'),
+ stats.scoreatpercentile(t2s[t2s!=0],80, interpolation_method='lower')/2]
+ phys_var_zs = []
+ for t2sl_i in range(len(t2s_lim)):
+ t2sl = t2s_lim[t2sl_i]
+ veinW = sig_B[:,veinc]*np.tile(rej_veinRZ,[sig_B.shape[0],1])
+ veincand = fmask(unmask(andb([s0[t2s!=0]<np.median(s0[t2s!=0]),t2s[t2s!=0]<t2sl])>=1,t2s!=0),mask)
+ veinW[~veincand]=0
+ invein = veinW.sum(1)[fmask(unmask(veinmaskf,mask)*unmask(veinW.sum(1)>1,mask),mask)]
+ minW = 10*(np.log10(invein).mean())-1*10**(np.log10(invein).std())
+ veinmaskB = veinW.sum(1)>minW
+ tsoc_Bp = tsoc_B.copy()
+ tsoc_Bp[tsoc_Bp<0]=0
+ sig_Bp = sig_B*tsoc_Bp>0
+ vvex = np.array([(tsoc_Bp[veinmaskB,ii]**2.).sum()/(tsoc_Bp[:,ii]**2.).sum() for ii in nc])
+ group0_res = np.intersect1d(KRguess,group0)
+ phys_var_zs.append( (vvex-vvex[group0_res].mean())/vvex[group0_res].std() )
+ veinBout = unmask(veinmaskB,mask)
+ niwrite(veinBout,aff,'veins_l%i.nii' % t2sl_i,head)
+ #Mask to sample veins
+ phys_var_z = np.array(phys_var_zs).max(0)
+ Vz2 = (varex_ - varex_[group0].mean())/varex_[group0].std()
+
+ """
+ Step 4: Learn joint TE-dependence spatial and temporal models to move remaining artifacts to ignore class
+ """
+
+ to_ign = []
+
+ minK_ign = np.max([F05,getelbow_cons(Kappas,val=True)])
+ newcest = len(group0)+len(toacc_hi[ Kappas[toacc_hi]>minK_ign ])
+ phys_art = np.setdiff1d(nc[andb([phys_var_z>3.5,Kappas<minK_ign])==2],group0)
+ phys_art = np.union1d(np.setdiff1d(nc[andb([phys_var_z>2,rankvec(phys_var_z)-rankvec(Kappas)>newcest/2,Vz2>-1])==3],group0),phys_art)
+ #Want to replace field_art with an acf/SVM based approach instead of a kurtosis/filter one
+ field_art = np.setdiff1d(nc[andb([mmix_kurt_z_max>5,Kappas<minK_ign])==2],group0)
+ field_art = np.union1d(np.setdiff1d(nc[andb([mmix_kurt_z_max>2,rankvec(mmix_kurt_z_max)-rankvec(Kappas)>newcest/2,Vz2>1,Kappas<F01])==4],group0),field_art)
+ field_art = np.union1d(np.setdiff1d(nc[andb([mmix_kurt_z_max>3,Vz2>3,Rhos>np.percentile(Rhos[group0],75) ])==3],group0),field_art)
+ field_art = np.union1d(np.setdiff1d(nc[andb([mmix_kurt_z_max>5,Vz2>5 ])==2],group0),field_art)
+ misc_art = np.setdiff1d(nc[andb([(rankvec(Vz)-rankvec(Ktz))>newcest/2,Kappas<Khighelbowval])==2],group0)
+ ign_cand = np.unique(list(field_art)+list(phys_art)+list(misc_art))
+ midkrej = np.union1d(midk,rej)
+ to_ign = np.setdiff1d(list(ign_cand),midkrej)
+ toacc = np.union1d(toacc_hi,toacc_lo)
+ ncl = np.setdiff1d(np.union1d(ncl,toacc),np.union1d(to_ign,midkrej))
+ ign = np.setdiff1d(nc,list(ncl)+list(midk)+list(rej))
+ orphan = np.setdiff1d(nc,list(ncl)+list(to_ign)+list(midk)+list(rej))
+ #Last ditch effort to save some transient components
+ if not strict_mode:
+ Vz3 = (varex_-varex_[ncl].mean())/varex_[ncl].std()
+ ncl = np.union1d(ncl,np.intersect1d(orphan,nc[andb([Kappas>F05,Rhos<F025,Kappas>Rhos,Vz3<=-1,Vz3>-3,mmix_kurt_z_max<2.5])==6]))
+ ign = np.setdiff1d(nc,list(ncl)+list(midk)+list(rej))
+ orphan = np.setdiff1d(nc,list(ncl)+list(to_ign)+list(midk)+list(rej))
+
+ if savecsdiag:
+ diagstepkeys=['rej','KRcut','Kcut','Rcut','dbscanfailed','KRguess','group0','dice_rej','rej_supp','to_clf','midk', 'svm_acc_fail', 'toacc_hi','toacc_lo','field_art','phys_art','misc_art','ncl','ign']
+ diagstepout=[]
+ for ddk in diagstepkeys: diagstepout.append("%s: %s" % (ddk,eval('str(%s)' % ddk) ) )
+ with open('csstepdata.txt','w') as ofh:
+ ofh.write('\n'.join(diagstepout))
+ allfz = np.array([Tz,Vz,Ktz,KRr,cnz,Rz,mmix_kurt,fdist_z])
+ np.savetxt('csdata.txt',allfz)
+
+ return list(sorted(ncl)),list(sorted(rej)),list(sorted(midk)),list(sorted(ign))
+
+
+def tedpca(combmode, mask, stabilize, head, ste=0, mlepca=True):
+ nx,ny,nz,ne,nt = catd.shape
+ ste = np.array([int(ee) for ee in str(ste).split(',')])
+ cAl = None
+ if len(ste) == 1 and ste[0]==-1:
+ print("-Computing PCA of optimally combined multi-echo data")
+ OCmask = make_mask(OCcatd[:,:,:,np.newaxis,:])
+ d = fmask(OCcatd,OCmask)
+ eim = eimask(d[:,np.newaxis,:])
+ eim = eim[:,0]==1
+ d = d[eim,:]
+ elif len(ste) == 1 and ste[0]==0:
+ print("-Computing PCA of spatially concatenated multi-echo data")
+ ste = np.arange(ne)
+ d = np.float64(fmask(catd,mask))
+ eim = eimask(d)==1
+ d = d[eim]
+ else:
+ print("-Computing PCA of TE #%s" % ','.join([str(ee) for ee in ste]))
+ d = np.float64(np.concatenate([fmask(catd[:,:,:,ee,:],mask)[:,np.newaxis,:] for ee in ste-1],axis=1))
+ eim = eimask(d)==1
+ eim = np.squeeze(eim)
+ d = np.squeeze(d[eim])
+
+ dz = ((d.T-d.T.mean(0))/d.T.std(0)).T #Variance normalize timeseries
+ dz = (dz-dz.mean())/dz.std() #Variance normalize everything
+
+ #if wvpca:
+ # print "++Transforming time series from time domain to wavelet domain."
+ # dz,cAl = dwtmat(dz)
+
+ pcastate_fn = 'pcastate.pklgz'
+
+ if not os.path.exists(pcastate_fn):
+ ##Do PC dimension selection
+ #Get eigenvalue cutoff
+
+ if mlepca:
+ from sklearn.decomposition import PCA
+ ppca = PCA(n_components='mle',svd_solver='full')
+ #ppca = PCA(n_components='mle')
+ ppca.fit(dz)
+ v = ppca.components_
+ s = ppca.explained_variance_
+ u = np.dot(np.dot(dz,v.T),np.diag(1./s))
+ else:
+ u,s,v = np.linalg.svd(dz,full_matrices=0)
+
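+ #The normalized eigenvalue spectrum (sp) feeds three criteria below: an elbow value (eigelb),
+ #a threshold derived from the eigenvalue differences in the second half of the spectrum (spmin),
+ #and the cumulative variance explained (spcum).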
+ sp = s/s.sum()
+ eigelb = sp[getelbow_mod(sp)]
+
+ spdif = np.abs(sp[1:]-sp[:-1])
+ spdifh = spdif[(spdif.shape[0]//2):]
+ spdmin = spdif.min()
+ spdthr = np.mean([spdifh.max(),spdmin])
+ spmin = sp[(spdif.shape[0]//2)+(np.arange(spdifh.shape[0])[spdifh>=spdthr][0])+1]
+ spcum = []
+ spcumv = 0
+ for sss in sp:
+ spcumv+=sss
+ spcum.append(spcumv)
+ spcum = np.array(spcum)
+
+ #Compute K and Rho for PCA comps
+
+ eimum = np.atleast_2d(eim)
+ eimum = np.transpose(eimum,np.argsort(np.atleast_2d(eim).shape)[::-1])
+ eimum = np.array(np.squeeze(unmask(eimum.prod(1),mask)),dtype=np.bool)
+ vTmix = v.T
+ vTmixN =((vTmix.T-vTmix.T.mean(0))/vTmix.T.std(0)).T
+ #ctb,KRd,betasv,v_T = fitmodels2(catd,v.T,eimum,t2s,tes,mmixN=vTmixN)
+ none,ctb,betasv,v_T = fitmodels_direct(catd,v.T,eimum,t2s,t2sG,tes,combmode,head,mmixN=vTmixN,full_sel=False)
+ ctb = ctb[ctb[:,0].argsort(),:]
+ ctb = np.vstack([ctb.T[0:3],sp]).T
+
+ #Save state
+ print("Saving PCA")
+ pcastate = {'u':u,'s':s,'v':v,'ctb':ctb,'eigelb':eigelb,'spmin':spmin,'spcum':spcum}
+ try:
+ pcastate_f = gzip.open(pcastate_fn,'wb')
+ pickle.dump(pcastate,pcastate_f)
+ pcastate_f.close()
+ except:
+ print("Could not save PCA solution!")
+
+ else:
+ print("Loading PCA")
+ pcastate_f = gzip.open(pcastate_fn,'rb')
+ pcastate = pickle.load(pcastate_f)
+ (u, s, v, ctb,
+ eigelb, spmin, spcum) = (pcastate['u'], pcastate['s'], pcastate['v'],
+ pcastate['ctb'], pcastate['eigelb'],
+ pcastate['spmin'], pcastate['spcum'])
+
+ np.savetxt('comp_table_pca.txt',ctb[ctb[:,1].argsort(),:][::-1])
+ np.savetxt('mepca_mix.1D',v[ctb[:,1].argsort()[::-1],:].T)
+
+ kappas = ctb[ctb[:,1].argsort(),1]
+ rhos = ctb[ctb[:,2].argsort(),2]
+ fmin,fmid,fmax = getfbounds(ne)
+ kappa_thr = np.average(sorted([fmin,getelbow_mod(kappas,val=True)/2,fmid]),weights=[kdaw,1,1])
+ rho_thr = np.average(sorted([fmin,getelbow_cons(rhos,val=True)/2,fmid]),weights=[rdaw,1,1])
+ if int(kdaw)==-1:
+ kappas_lim = kappas[andb([kappas<fmax,kappas>fmin])==2]
+ #kappas_lim = kappas[andb([kappas<fmax,kappas>fmin])==2]
+ kappa_thr = kappas_lim[getelbow_mod(kappas_lim)]
+ rhos_lim = rhos[andb([rhos<fmax,rhos>fmin])==2]
+ rho_thr = rhos_lim[getelbow_mod(rhos_lim)]
+ stabilize = True
+ if int(kdaw)!=-1 and int(rdaw)==-1:
+ rhos_lim = rhos[andb([rhos<fmax,rhos>fmin])==2]
+ rho_thr = rhos_lim[getelbow_mod(rhos_lim)]
+
+ temp1 = np.array(ctb[:, 1] > kappa_thr, dtype=np.int)
+ temp2 = np.array(ctb[:, 2] > rho_thr, dtype=np.int)
+ temp3 = np.array(ctb[:, 3] > eigelb, dtype=np.int)
+ temp4 = np.array(ctb[:, 3] > spmin, dtype=np.int)
+ temp5 = np.array(ctb[:, 1] != F_MAX, dtype=np.int)
+ temp6 = np.array(ctb[:, 2] != F_MAX, dtype=np.int)
+ pcscore = (temp1 + temp2 + temp3) * temp4 * temp5 * temp6
+ if stabilize:
+ temp7 = np.array(spcum < 0.95, dtype=np.int)
+ temp8 = np.array(ctb[:, 2] > fmin, dtype=np.int)
+ temp9 = np.array(ctb[:, 1] > fmin, dtype=np.int)
+ pcscore = pcscore * temp7 * temp8 * temp9
+
+ pcsel = pcscore > 0
+ pcrej = np.array(pcscore==0,dtype=np.int)*np.array(ctb[:,3]>spmin,dtype=np.int) > 0
+
+ dd = u.dot(np.diag(s*np.array(pcsel,dtype=np.int))).dot(v)
+
+ #if wvpca:
+ # print "++Transforming PCA solution from wavelet domain to time domain"
+ # dd = idwtmat(dd,cAl)
+
+ nc = s[pcsel].shape[0]
+ print(pcsel)
+ print("--Selected %i components. Minimum Kappa=%0.2f Rho=%0.2f" % (nc,kappa_thr,rho_thr))
+
+ dd = ((dd.T-dd.T.mean(0))/dd.T.std(0)).T # Variance normalize timeseries
+ dd = (dd-dd.mean())/dd.std() # Variance normalize everything
+
+ return nc,dd
+
+
+def tedica(nc, dd, conv, fixed_seed, cost, final_cost):
+ """
+ Input is the dimensionally reduced, spatially concatenated multi-echo time series from tedpca().
+ Output is the z-scored ICA mixing matrix (mmix); component maps and betas are computed later by fitting catd to mmix.
+ """
+ #Do ICA
+ import mdp
+ climit = float("%s" % conv)
+ #icanode = mdp.nodes.FastICANode(white_comp=nc, white_parm={'svd':True},approach='symm', g=cost, fine_g=options.finalcost, limit=climit, verbose=True)
+ mdp.numx_rand.seed(fixed_seed)
+ icanode = mdp.nodes.FastICANode(white_comp=nc,approach='symm', g=cost, fine_g=final_cost, coarse_limit=climit*100, limit=climit, verbose=True)
+ icanode.train(dd)
+ smaps = icanode.execute(dd)
+ mmix = icanode.get_recmatrix().T
+ mmix = (mmix-mmix.mean(0))/mmix.std(0)
+ return mmix
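+
+# Usage sketch (argument values taken from the defaults listed in main's docstring):
+# mmix = tedica(nc, dd, conv=2.5e-5, fixed_seed=42, cost='tanh', final_cost='tanh')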
+
+
+def gscontrol_raw(OCcatd,head,dtrank=4):
+ """
+ This function uses a spatial global signal estimation approach to remove the global signal
+ from the optimally combined time series. The spatial global signal is estimated from the
+ optimally combined data after detrending with a Legendre polynomial basis of order=0 and
+ degree=dtrank.
+ """
+
+ print("++ Applying amplitude-based T1 equilibration correction")
+
+ #Legendre polynomial basis for denoising
+ from scipy.special import lpmv
+ Lmix = np.array([lpmv(0,vv,np.linspace(-1,1,OCcatd.shape[-1])) for vv in range(dtrank)]).T
+
+ #Compute mean, std, mask local to this function - inefficient, but makes this function a bit more modular
+ Gmu = OCcatd.mean(-1)
+ Gstd = OCcatd.std(-1)
+ Gmask = Gmu!=0
+
+ #Find spatial global signal
+ dat = OCcatd[Gmask] - Gmu[Gmask][:,np.newaxis]
+ sol = np.linalg.lstsq(Lmix,dat.T) #Legendre basis for detrending
+ detr = dat - np.dot(sol[0].T,Lmix.T)[0]
+ sphis = (detr).min(1)
+ sphis -= sphis.mean()
+ niwrite(unmask(sphis,Gmask),aff,'T1gs.nii',head)
+
+ #Find time course of the spatial global signal, make basis with the Legendre basis
+ glsig = np.linalg.lstsq(np.atleast_2d(sphis).T,dat)[0]
+ glsig = (glsig-glsig.mean() ) / glsig.std()
+ np.savetxt('glsig.1D',glsig)
+ glbase = np.hstack([Lmix,glsig.T])
+
+ #Project global signal out of optimally combined data
+ sol = np.linalg.lstsq(np.atleast_2d(glbase),dat.T)
+ tsoc_nogs = dat - np.dot(np.atleast_2d(sol[0][dtrank]).T,np.atleast_2d(glbase.T[dtrank])) + Gmu[Gmask][:,np.newaxis]
+
+ # global OCcatd, sphis_raw
+ sphis_raw = sphis
+ niwrite(OCcatd,aff,'tsoc_orig.nii',head)
+ OCcatd = unmask(tsoc_nogs,Gmask)
+ niwrite(OCcatd,aff,'tsoc_nogs.nii',head)
+
+ #Project glbase out of each echo
+ # for ii in range(Ne):
+ # dat = catd[:,:,:,ii,:][Gmask]
+ # sol = np.linalg.lstsq(np.atleast_2d(glbase),dat.T)
+ # e_nogs = dat - np.dot(np.atleast_2d(sol[0][dtrank]).T,np.atleast_2d(glbase.T[dtrank]))
+ # catd[:,:,:,ii,:] = unmask(e_nogs,Gmask)
+
+
+def gscontrol_mmix(mmix, acc, rej, midk, empty, head):
+
+ Gmu = OCcatd.mean(-1)
+ Gstd = OCcatd.std(-1)
+ Gmask = Gmu!=0
+
+ """
+ Compute temporal regression
+ """
+ dat = (OCcatd[Gmask] - Gmu[Gmask][:,np.newaxis]) / Gstd[mask][:,np.newaxis]
+ solG = np.linalg.lstsq(mmix,dat.T)
+ resid = dat - np.dot(solG[0].T,mmix.T)
+
+ """
+ Build BOLD time series without amplitudes, and save T1-like effect
+ """
+ bold_ts = np.dot(solG[0].T[:,acc],mmix[:,acc].T)
+ sphis = bold_ts.min(-1)
+ sphis -= sphis.mean()
+ print(sphis.shape)
+ niwrite(unmask(sphis,mask),aff,'sphis_hik.nii',head)
+
+ """
+ Find the global signal based on the T1-like effect
+ """
+ sol = np.linalg.lstsq(np.atleast_2d(sphis).T,dat)
+ glsig = sol[0]
+
+ """
+ T1 correct time series by regression
+ """
+ bold_noT1gs = bold_ts - np.dot(np.linalg.lstsq(glsig.T,bold_ts.T)[0].T,glsig)
+ niwrite(unmask(bold_noT1gs*Gstd[mask][:,np.newaxis],mask),aff,'hik_ts_OC_T1c.nii',head)
+
+ """
+ Make medn version of T1 corrected time series
+ """
+ niwrite(Gmu[:,:,:,np.newaxis]+unmask((bold_noT1gs+resid)*Gstd[mask][:,np.newaxis] ,mask),aff,'dn_ts_OC_T1c.nii',head)
+
+ """
+ Orthogonalize mixing matrix w.r.t. T1-GS
+ """
+ mmixnogs = mmix.T - np.dot(np.linalg.lstsq(glsig.T,mmix)[0].T,glsig)
+ mmixnogs_mu = mmixnogs.mean(-1)
+ mmixnogs_std = mmixnogs.std(-1)
+ mmixnogs_norm = (mmixnogs-mmixnogs_mu[:,np.newaxis])/mmixnogs_std[:,np.newaxis]
+ mmixnogs_norm = np.vstack([np.atleast_2d(np.ones(max(glsig.shape))),glsig,mmixnogs_norm])
+
+ """
+ Write T1-GS corrected components and mixing matrix
+ """
+ sol = np.linalg.lstsq(mmixnogs_norm.T,dat.T)
+ niwrite(unmask(sol[0].T[:,2:],mask),aff,'betas_hik_OC_T1c.nii',head)
+ np.savetxt('meica_mix_T1c.1D',mmixnogs)
+
+
+def write_split_ts(data,comptable,mmix, acc, rej, midk, head,suffix=''):
+ mdata = fmask(data,mask)
+ betas = fmask(get_coeffs(unmask((mdata.T-mdata.T.mean(0)).T,mask),mask,mmix),mask)
+ dmdata = mdata.T-mdata.T.mean(0)
+ varexpl = (1-((dmdata.T-betas.dot(mmix.T))**2.).sum()/(dmdata**2.).sum())*100
+ print('Variance explained: ', varexpl , '%')
+ midkts = betas[:,midk].dot(mmix.T[midk,:])
+ lowkts = betas[:,rej].dot(mmix.T[rej,:])
+ if len(acc)!=0:
+ niwrite(unmask(betas[:,acc].dot(mmix.T[acc,:]),mask),aff,'_'.join(['hik_ts',suffix])+'.nii',head)
+ if len(midk)!=0:
+ niwrite(unmask(midkts,mask),aff,'_'.join(['midk_ts',suffix])+'.nii',head)
+ if len(rej)!=0:
+ niwrite(unmask(lowkts,mask),aff,'_'.join(['lowk_ts',suffix])+'.nii',head)
+ niwrite(unmask(fmask(data,mask)-lowkts-midkts,mask),aff,'_'.join(['dn_ts',suffix])+'.nii',head)
+ return varexpl
+
+
+def writefeats(data,mmix,mask,head,suffix=''):
+ #Write feature versions of components
+ feats = computefeats2(data,mmix,mask)
+ niwrite(unmask(feats,mask),aff,'_'.join(['feats',suffix])+'.nii',head)
+
+
+def writect(comptable, nt, acc, rej, midk, empty, ctname='', varexpl='-1'):
+ nc = comptable.shape[0]
+ sortab = comptable[comptable[:,1].argsort()[::-1],:]
+ if ctname=='': ctname = 'comp_table.txt'
+ open('accepted.txt','w').write(','.join([str(int(cc)) for cc in acc]))
+ open('rejected.txt','w').write(','.join([str(int(cc)) for cc in rej]))
+ open('midk_rejected.txt','w').write(','.join([str(int(cc)) for cc in midk]))
+ with open(ctname,'w') as f:
+ f.write("#\n#ME-ICA Component statistics table for: %s #\n" % (os.path.abspath(os.path.curdir)) )
+ f.write("#Dataset variance explained by ICA (VEx): %.02f \n" % ( varexpl ) )
+ f.write("#Total components generated by decomposition (TCo): %i \n" % ( nc ) )
+ f.write("#No. accepted BOLD-like components, i.e. effective degrees of freedom for correlation (lower bound; DFe): %i\n" % ( len(acc) ) )
+ f.write("#Total number of rejected components (RJn): %i\n" % (len(midk)+len(rej)) )
+ f.write("#Nominal degress of freedom in denoised time series (..._medn.nii.gz; DFn): %i \n" % (nt-len(midk)-len(rej)) )
+ f.write("#ACC %s \t#Accepted BOLD-like components\n" % ','.join([str(int(cc)) for cc in acc]) )
+ f.write("#REJ %s \t#Rejected non-BOLD components\n" % ','.join([str(int(cc)) for cc in rej]) )
+ f.write("#MID %s \t#Rejected R2*-weighted artifacts\n" % ','.join([str(int(cc)) for cc in midk]) )
+ f.write("#IGN %s \t#Ignored components (kept in denoised time series)\n" % ','.join([str(int(cc)) for cc in empty]) )
+ f.write("#VEx TCo DFe RJn DFn \n")
+ f.write("##%.02f %i %i %i %i \n" % (varexpl,nc,len(acc),len(midk)+len(rej),nt-len(midk)-len(rej)))
+ f.write("# comp Kappa Rho %%Var %%Var(norm) \n")
+ for i in range(nc):
+ f.write('%d\t%f\t%f\t%.2f\t%.2f\n'%(sortab[i,0],sortab[i,1],sortab[i,2],sortab[i,3],sortab[i,4]))
+
+
+def writeresults(OCcatd, comptable, mmix, nt, acc, rej, midk, empty, head):
+ print("++ Writing optimally combined time series")
+ ts = OCcatd
+ niwrite(ts,aff,'ts_OC.nii',head)
+ print("++ Writing Kappa-filtered optimally combined timeseries")
+ varexpl = write_split_ts(ts,comptable,mmix,acc, rej, midk, head, suffix = 'OC')
+ print("++ Writing signal versions of components")
+ ts_B = get_coeffs(ts,mask,mmix)
+ niwrite(ts_B[:,:,:,:],aff,'_'.join(['betas','OC'])+'.nii', head)
+ if len(acc)!=0:
+ niwrite(ts_B[:,:,:,acc],aff,'_'.join(['betas_hik','OC'])+'.nii', head)
+ print("++ Writing optimally combined high-Kappa features")
+ writefeats(split_ts(ts,comptable,mmix,acc, rej, midk)[0],mmix[:,acc],mask,head, suffix = 'OC2')
+ print("++ Writing component table")
+ writect(comptable, nt, acc, rej, midk, empty, ctname='comp_table.txt',varexpl=varexpl)
+
+
+def writeresults_echoes(acc, rej, midk, head):
+ for ii in range(ne):
+ print("++ Writing Kappa-filtered TE#%i timeseries" % (ii+1))
+ write_split_ts(catd[:,:,:,ii,:],comptable, mmix,acc, rej, midk, head, suffix = 'e%i' % (ii+1))
+
+
+def main(options):
+ """
+
+ Args (and defaults):
+ data, tes, mixm=None, ctab=None, manacc=None, strict=False,
+ no_gscontrol=False, kdaw=10., rdaw=1., conv=2.5e-5, ste=-1,
+ combmode='t2s', dne=False, initcost='tanh', finalcost='tanh',
+ stabilize=False, fout=False, filecsdata=False, label=None,
+ fixed_seed=42
+ """
+ global tes, ne, catd, head, aff
+ tes = [float(te) for te in options.tes]
+ ne = len(tes)
+ catim = nib.load(options.data[0])
+
+ head = catim.get_header()
+ head.extensions = []
+ head.set_sform(head.get_sform(),code=1)
+ aff = catim.get_affine()
+ catd = cat2echos(catim.get_data(),ne)
+ nx,ny,nz,Ne,nt = catd.shape
+
+ #Parse options, prepare output directory
+ if options.fout: options.fout = aff
+ else: options.fout=None
+
+ global kdaw, rdaw
+ if not options.stabilize:
+ stabilize = False
+ else:
+ stabilize = True
+ kdaw = float(options.kdaw)
+ rdaw = float(options.rdaw)
+
+ if options.label!=None: dirname='%s' % '.'.join(['TED',options.label])
+ else: dirname='TED'
+ os.system('mkdir %s' % dirname)
+ if options.mixm!=None:
+ try: os.system('cp %s %s/meica_mix.1D; cp %s %s/%s' % (options.mixm,dirname,options.mixm,dirname,os.path.basename(options.mixm)))
+ except: pass
+ if options.ctab!=None:
+ try: os.system('cp %s %s/comp_table.txt; cp %s %s/%s' % (options.ctab,dirname,options.ctab,dirname,os.path.basename(options.ctab)))
+ except: pass
+
+ os.chdir(dirname)
+
+ print("++ Computing Mask")
+ global mask
+ mask,masksum = makeadmask(catd,minimum=False,getsum=True)
+
+ print("++ Computing T2* map")
+ global t2s, s0, t2ss, s0s, t2sG, s0G
+ t2s,s0,t2ss,s0s,t2sG,s0G = t2sadmap(catd,mask,tes,masksum,1)
+
+ #Condition values
+ cap_t2s = stats.scoreatpercentile(t2s.flatten(),99.5, interpolation_method='lower')
+ t2s[t2s>cap_t2s*10]=cap_t2s
+ niwrite(s0,aff,'s0v.nii',head)
+ niwrite(t2s,aff,'t2sv.nii',head)
+ niwrite(t2ss,aff,'t2ss.nii',head)
+ niwrite(s0s,aff,'s0vs.nii',head)
+ niwrite(s0G,aff,'s0vG.nii',head)
+ niwrite(t2sG,aff,'t2svG.nii',head)
+
+ #Optimally combine data
+ combmode = options.combmode
+ global OCcatd
+ OCcatd = optcom(catd,
+ t2sG,
+ tes,
+ mask,
+ combmode,
+ useG=True)
+
+ if not options.no_gscontrol:
+ gscontrol_raw(OCcatd, head)
+
+ if options.mixm == None:
+ print("++ Doing ME-PCA and ME-ICA")
+
+ nc,dd = tedpca(combmode, mask, stabilize, head, ste = options.ste)
+
+ mmix_orig = tedica(nc, dd, options.conv, options.fixed_seed, cost=options.initcost, final_cost = options.finalcost)
+ np.savetxt('__meica_mix.1D',mmix_orig)
+ seldict,comptable,betas,mmix = fitmodels_direct(catd,mmix_orig,mask,t2s,t2sG,tes,combmode,head, fout=options.fout,reindex=True)
+
+ np.savetxt('meica_mix.1D',mmix)
+ if 'GROUP0' in sys.argv:
+ group0_flag = True
+ else: group0_flag = False
+ acc,rej,midk,empty = selcomps(seldict,mmix,head, knobargs=options,group0_only=group0_flag,strict_mode=options.strict)
+ del dd
+ else:
+ mmix_orig = np.loadtxt('meica_mix.1D')
+ eim = eimask(np.float64(fmask(catd,mask)))==1
+ eimum = np.array(np.squeeze(unmask(np.array(eim,dtype=np.int).prod(1),mask)),dtype=np.bool)
+ seldict,comptable,betas,mmix = fitmodels_direct(catd,mmix_orig,mask,t2s,t2sG,tes,combmode,head,fout=options.fout)
+ if options.ctab == None:
+ acc,rej,midk,empty = selcomps(seldict,mmix,head, knobargs=options,strict_mode=options.strict)
+ else:
+ acc,rej,midk,empty = ctabsel(options.ctab)
+
+ if len(acc)==0:
+ print("\n** WARNING! No BOLD components detected!!! Please check data and results!\n")
+
+ writeresults(OCcatd, comptable, mmix, nt, acc, rej, midk, empty, head)
+ gscontrol_mmix(mmix, acc, rej, midk, empty, head)
+ if options.dne: writeresults_echoes(acc, rej, midk, head)
diff --git a/tedana/tests/__init__.py b/tedana/tests/__init__.py
new file mode 100644
index 000000000..b1c932756
--- /dev/null
+++ b/tedana/tests/__init__.py
@@ -0,0 +1,2 @@
+# emacs: -*- mode: python-mode; py-indent-offset: 4; tab-width: 4; indent-tabs-mode: nil -*-
+# ex: set sts=4 ts=4 sw=4 et:
diff --git a/tedana/tests/test_tedana.py b/tedana/tests/test_tedana.py
new file mode 100644
index 000000000..3a0c59915
--- /dev/null
+++ b/tedana/tests/test_tedana.py
@@ -0,0 +1,68 @@
+"""Tests for tedana."""
+
+import os.path
+from tedana.interfaces import tedana
+from tedana.cli import run
+import nibabel as nb
+import numpy as np
+from pathlib import Path
+
+def test_basic_tedana():
+ """
+ A very simple test, to confirm that tedana creates output
+ files.
+ """
+
+ parser = run.get_parser()
+ options = parser.parse_args(['-d', '/home/neuro/data/zcat_ffd.nii.gz',
+ '-e', '14.5', '38.5', '62.5'])
+ tedana.main(options)
+ assert os.path.isfile('comp_table.txt')
+
+
+def compare_nifti(fn, test_dir, res_dir):
+ """
+ Helper function to compare two niftis
+ """
+ res_fp = (res_dir/fn).as_posix()
+ test_fp = (test_dir/fn).as_posix()
+ assert np.allclose(nb.load(res_fp).get_data(), nb.load(test_fp).get_data())
+
+
+def test_outputs():
+ """
+ Compare the niftis specified in the list below against the reference data
+ """
+ nifti_test_list = [
+ '.cc_temp.nii.gz',
+ '.fcl_in.nii.gz',
+ '.fcl_out.nii.gz',
+ '__clin.nii.gz',
+ '__clout.nii.gz',
+ 'betas_hik_OC.nii',
+ 'betas_hik_OC_T1c.nii',
+ 'betas_OC.nii',
+ 'dn_ts_OC.nii',
+ 'dn_ts_OC_T1c.nii',
+ 'feats_OC2.nii',
+ 'hik_ts_OC.nii',
+ 'hik_ts_OC_T1c.nii',
+ 'lowk_ts_OC.nii',
+ 'midk_ts_OC.nii',
+ 's0v.nii',
+ 's0vG.nii',
+ 's0vs.nii',
+ 'sphis_hik.nii',
+ 'T1gs.nii',
+ 't2ss.nii',
+ 't2sv.nii',
+ 't2svG.nii',
+ 'ts_OC.nii',
+ 'tsoc_nogs.nii',
+ 'tsoc_orig.nii',
+ 'veins_l0.nii',
+ 'veins_l1.nii']
+ test_dir = Path('/home/neuro/data/test_res/')
+ res_dir = Path('/home/neuro/code/TED/')
+ for fn in nifti_test_list:
+ compare_nifti(fn, test_dir, res_dir)
\ No newline at end of file
diff --git a/tedana/utils/__init__.py b/tedana/utils/__init__.py
new file mode 100644
index 000000000..b1c932756
--- /dev/null
+++ b/tedana/utils/__init__.py
@@ -0,0 +1,2 @@
+# emacs: -*- mode: python-mode; py-indent-offset: 4; tab-width: 4; indent-tabs-mode: nil -*-
+# ex: set sts=4 ts=4 sw=4 et:
diff --git a/tedana/utils/utils.py b/tedana/utils/utils.py
new file mode 100644
index 000000000..e6f792fde
--- /dev/null
+++ b/tedana/utils/utils.py
@@ -0,0 +1,387 @@
+"""Utilities for meica package"""
+import numpy as np
+import nibabel as nib
+from scipy.optimize import leastsq
+from scipy.stats import scoreatpercentile
+
+from ..due import due, BibTeX
+
+def cat2echos(data, Ne):
+ """
+ Separates z- and echo-axis in `data`
+
+ Parameters
+ ----------
+ data : array_like
+ Array of shape (nx, ny, nz*Ne, nt)
+ Ne : int
+ Number of echoes that were in original (uncombined) data array
+
+ Returns
+ -------
+ ndarray
+ Array of shape (nx, ny, nz, Ne, nt)
+ """
+
+ nx, ny = data.shape[0:2]
+ nz = data.shape[2] // Ne
+ if len(data.shape) > 3:
+ nt = data.shape[3]
+ else:
+ nt = 1
+ return np.reshape(data, (nx, ny, nz, Ne, nt), order='F')
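+
+# Usage sketch (hypothetical shapes): z-concatenated data with Ne=3 echoes and 33 slices,
+# i.e. an array of shape (64, 64, 99, 200), is split into (64, 64, 33, 3, 200):
+#
+# >>> cat2echos(np.zeros((64, 64, 99, 200)), Ne=3).shape
+# (64, 64, 33, 3, 200)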
+
+def makeadmask(cdat, minimum=True, getsum=False):
+ """
+ Create a mask.
+ """
+ nx, ny, nz, Ne, _ = cdat.shape
+
+ mask = np.ones((nx, ny, nz), dtype=np.bool)
+
+ if minimum:
+ mask = cdat.prod(axis=-1).prod(-1) != 0
+ return mask
+ else:
+ #Make a map of longest echo that a voxel can be sampled with,
+ #with minimum value of map as X value of voxel that has median
+ #value in the 1st echo. N.b. larger factor leads to bias to lower TEs
+ emeans = cdat.mean(-1)
+ medv = emeans[:, :, :, 0] == scoreatpercentile(emeans[:, :, :, 0][emeans[:, :, :, 0] != 0],
+ 33, interpolation_method='higher')
+ lthrs = np.squeeze(np.array([emeans[:, :, :, ee][medv] / 3 for ee in range(Ne)]))
+
+ if len(lthrs.shape) == 1:
+ lthrs = np.atleast_2d(lthrs).T
+ lthrs = lthrs[:, lthrs.sum(0).argmax()]
+
+ mthr = np.ones([nx, ny, nz, Ne])
+ for i_echo in range(Ne):
+ mthr[:, :, :, i_echo] *= lthrs[i_echo]
+ mthr = np.abs(emeans) > mthr
+ masksum = np.array(mthr, dtype=np.int).sum(-1)
+ mask = masksum != 0
+ if getsum:
+ return mask, masksum
+ else:
+ return mask
+
+def uncat2echos(data, Ne):
+ """
+ Combines z- and echo-axis in `data`
+
+ Parameters
+ ----------
+ data : array_like
+ Array of shape (nx, ny, nz, Ne, nt)
+ Ne : int
+ Number of echoes; should be equal to `data.shape[3]`
+
+ Returns
+ -------
+ ndarray
+ Array of shape (nx, ny, nz*Ne, nt)
+ """
+
+ nx, ny = data.shape[0:2]
+ nz = data.shape[2] * Ne
+ if len(data.shape) > 4:
+ nt = data.shape[4]
+ else:
+ nt = 1
+ return np.reshape(data, (nx, ny, nz, nt), order='F')
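+
+# Usage sketch (hypothetical shapes): the inverse of cat2echos, e.g.
+#
+# >>> uncat2echos(np.zeros((64, 64, 33, 3, 200)), Ne=3).shape
+# (64, 64, 99, 200)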
+
+
+def make_mask(catdata):
+ """
+ Generates a 3D mask of `catdata`
+
+ Only voxels that are consistently (i.e., across time AND echoes) non-zero
+ in `catdata` are True in output
+
+ Parameters
+ ----------
+ catdata : (X x Y x Z x E x T) array_like
+ Multi-echo data array, where X, Y, Z are spatial dimensions, E
+ corresponds to individual echo data, and T is time
+
+ Returns
+ -------
+ mask : (X x Y x Z) np.ndarray
+ Boolean array
+ """
+
+ catdata = np.asarray(catdata)
+ return catdata.prod(axis=-1).prod(axis=-1).astype('bool')
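+
+# Usage sketch (toy data): a voxel that is zero at any echo or timepoint is dropped from
+# the mask, because the product over the echo and time axes is then zero:
+#
+# >>> dat = np.ones((2, 2, 2, 3, 10))
+# >>> dat[0, 0, 0, 1, 5] = 0
+# >>> bool(make_mask(dat)[0, 0, 0])
+# False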
+
+
+def make_opt_com(medata):
+ """
+ Makes optimal combination from input multi-echo data
+
+ Parameters
+ ----------
+ medata : tedana.interfaces.data.MultiEchoData
+ """
+
+ pass
+
+
+def fmask(data, mask):
+ """
+ Masks `data` with non-zero entries of `mask`
+
+ Parameters
+ ----------
+ data : array_like
+ Array of shape (nx, ny, nz[, Ne[, nt]])
+ mask : array_like
+ Boolean array of shape (nx, ny, nz)
+
+ Returns
+ -------
+ ndarray
+ Masked array of shape (nx*ny*nz[, Ne[, nt]])
+ """
+
+ s = data.shape
+
+ N = s[0] * s[1] * s[2]
+ news = []
+ news.append(N)
+
+ if len(s) > 3:
+ news.extend(s[3:])
+
+ tmp1 = np.reshape(data, news)
+ fdata = tmp1.compress((mask > 0).ravel(), axis=0)
+
+ return fdata.squeeze()
+
+
+def unmask(data, mask):
+ """
+ Unmasks `data` using non-zero entries of `mask`
+
+ Parameters
+ ----------
+ data : array_like
+ Masked array of shape (nx*ny*nz[, Ne[, nt]])
+ mask : array_like
+ Boolean array of shape (nx, ny, nz)
+
+ Returns
+ -------
+ ndarray
+ Array of shape (nx, ny, nz[, Ne[, nt]])
+ """
+
+ M = (mask != 0).ravel()
+ Nm = M.sum()
+
+ nx, ny, nz = mask.shape
+
+ if len(data.shape) > 1:
+ nt = data.shape[1]
+ else:
+ nt = 1
+
+ out = np.zeros((nx * ny * nz, nt), dtype=data.dtype)
+ out[M, :] = np.reshape(data, (Nm, nt))
+
+ return np.squeeze(np.reshape(out, (nx, ny, nz, nt)))
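+
+# Usage sketch (toy data): unmask is the approximate inverse of fmask; voxels outside the
+# mask are filled with zeros:
+#
+# >>> data = np.random.rand(4, 4, 4, 10)
+# >>> mask = data[..., 0] > 0.5
+# >>> masked = fmask(data, mask)        # shape (mask.sum(), 10)
+# >>> restored = unmask(masked, mask)   # shape (4, 4, 4, 10)
+# >>> bool(np.allclose(restored[mask], masked))
+# True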
+
+
+def moments(data):
+ """
+ Returns gaussian parameters of a 2D distribution by calculating its moments
+
+ Parameters
+ ----------
+ data : array_like
+ 2D data array
+
+ Returns
+ -------
+ height : float
+ center_x : float
+ center_y : float
+ width_x : float
+ width_y : float
+
+ References
+ ----------
+ `Scipy Cookbook`_
+
+ .. _Scipy Cookbook: http://scipy-cookbook.readthedocs.io/items/FittingData.html#Fitting-a-2D-gaussian
+ """
+
+ total = data.sum()
+ X, Y = np.indices(data.shape)
+ center_x = (X * data).sum() / total
+ center_y = (Y * data).sum() / total
+ col = data[:, int(center_y)]
+ width_x = np.sqrt(abs((np.arange(col.size) - center_y)**2 * col).sum() / col.sum())
+ row = data[int(center_x), :]
+ width_y = np.sqrt(abs((np.arange(row.size) - center_x)**2 * row).sum() / row.sum())
+ height = data.max()
+ return height, center_x, center_y, width_x, width_y
+
+
+def gaussian(height, center_x, center_y, width_x, width_y):
+ """
+ Returns gaussian function
+
+ Parameters
+ ----------
+ height : float
+ center_x : float
+ center_y : float
+ width_x : float
+ width_y : float
+
+ Returns
+ -------
+ lambda
+ Gaussian function with provided parameters
+
+ References
+ ----------
+ `Scipy Cookbook`_
+
+ .. _Scipy Cookbook: http://scipy-cookbook.readthedocs.io/items/FittingData.html#Fitting-a-2D-gaussian
+ """
+
+ width_x = float(width_x)
+ width_y = float(width_y)
+ return lambda x, y: height * np.exp(-(((center_x - x) / width_x)**2 + ((center_y - y) / width_y)**2) / 2)
+
+
+def fitgaussian(data):
+ """
+ Returns estimated gaussian parameters of a 2D distribution found by a fit
+
+ Parameters
+ ----------
+ data : array_like
+ 2D data array
+
+ Returns
+ -------
+ p : array_like
+ Array with height, center_x, center_y, width_x, width_y of `data`
+
+ References
+ ----------
+ `Scipy Cookbook`_
+
+ .. _Scipy Cookbook: http://scipy-cookbook.readthedocs.io/items/FittingData.html#Fitting-a-2D-gaussian
+ """
+
+ params = moments(data)
+ errorfunction = lambda p, data: np.ravel(gaussian(*p)(*np.indices(data.shape)) - data)
+ (p, _) = leastsq(errorfunction, params, data)
+ return p
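+
+# Usage sketch (synthetic data): build a 2D gaussian with known parameters and check that
+# fitgaussian approximately recovers them:
+#
+# >>> Xin, Yin = np.indices((50, 50))
+# >>> data = gaussian(3.0, 25.0, 25.0, 5.0, 5.0)(Xin, Yin)
+# >>> bool(np.allclose(fitgaussian(data), [3.0, 25.0, 25.0, 5.0, 5.0]))
+# True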
+
+
+def niwrite(data, affine, name, head, header=None):
+ """
+ Write out nifti file.
+ """
+ data[np.isnan(data)] = 0
+ if header is None:
+ this_header = head.copy()
+ this_header.set_data_shape(list(data.shape))
+ else:
+ this_header = header
+
+ outni = nib.Nifti1Image(data, affine, header=this_header)
+ outni.set_data_dtype('float64')
+ outni.to_filename(name)
+
+
+@due.dcite(BibTeX('@article{dice1945measures,'\
+ 'author={Dice, Lee R},'\
+ 'title={Measures of the amount of ecologic association between species},'\
+ 'year = {1945},'\
+ 'publisher = {Wiley Online Library},'\
+ 'journal = {Ecology},'\
+ 'volume={26},'\
+ 'number={3},'\
+ 'pages={297--302}}'),
+ description='Introduction of Sorenson-Dice index by Dice in 1945.')
+@due.dcite(BibTeX('@article{sorensen1948method,'\
+ 'author={S{\\o}rensen, Thorvald},'\
+ 'title={A method of establishing groups of equal amplitude '\
+ 'in plant sociology based on similarity of species and its '\
+ 'application to analyses of the vegetation on Danish commons},'\
+ 'year = {1948},'\
+ 'publisher = {Wiley Online Library},'\
+ 'journal = {Biol. Skr.},'\
+ 'volume={5},'\
+ 'pages={1--34}}'),
+ description='Introduction of Sorenson-Dice index by Sorenson in 1948.')
+def dice(arr1, arr2):
+ """
+ Compute Dice's similarity index between two numpy arrays. Arrays will be
+ binarized before comparison.
+
+ Parameters
+ ----------
+ arr1, arr2 : array-like
+ Input arrays, arrays to binarize and compare.
+
+ Returns
+ -------
+ dsi : float
+ Dice-Sorenson index.
+
+ References
+ ----------
+ REF_
+
+ .. _REF: https://gist.github.com/brunodoamaral/e130b4e97aa4ebc468225b7ce39b3137
+ """
+ arr1 = np.array(arr1 != 0).astype(int)
+ arr2 = np.array(arr2 != 0).astype(int)
+
+ if arr1.shape != arr2.shape:
+ raise ValueError('Shape mismatch: arr1 and arr2 must have the same shape.')
+
+ arr_sum = arr1.sum() + arr2.sum()
+ if arr_sum == 0:
+ dsi = 0
+ else:
+ intersection = np.logical_and(arr1, arr2)
+ dsi = (2. * intersection.sum()) / arr_sum
+
+ return dsi
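+
+# Usage sketch: two binary maps that share one of three non-zero voxels give
+# DSI = 2 * 1 / (2 + 1):
+#
+# >>> round(dice([0, 1, 1, 0], [0, 1, 0, 0]), 3)
+# 0.667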
+
+
+def andb(arrs):
+ """
+ Sums arrays in `arrs`
+
+ Parameters
+ ----------
+ arrs : list
+ List of boolean or integer arrays to be summed
+
+ Returns
+ -------
+ result : ndarray
+ Integer array of summed `arrs`
+ """
+
+ same_shape = []
+ for arr in arrs:
+ for arr2 in arrs:
+ same_shape.append(arr.shape == arr2.shape)
+
+ if not np.all(same_shape):
+ raise ValueError('All input arrays must have same shape')
+
+ result = np.zeros(arrs[0].shape)
+ for arr in arrs:
+ result += np.array(arr, dtype=np.int)
+ return result
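+
+# Usage sketch: andb sums boolean arrays, so comparing the result to the number of
+# conditions (a pattern used throughout selcomps) acts as an element-wise logical AND:
+#
+# >>> a, b = np.array([1, 5, 3]), np.array([2, 1, 4])
+# >>> andb([a > 2, b > 2]) == 2
+# array([False, False,  True])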
diff --git a/tedana/workflows/__init__.py b/tedana/workflows/__init__.py
new file mode 100644
index 000000000..b1c932756
--- /dev/null
+++ b/tedana/workflows/__init__.py
@@ -0,0 +1,2 @@
+# emacs: -*- mode: python-mode; py-indent-offset: 4; tab-width: 4; indent-tabs-mode: nil -*-
+# ex: set sts=4 ts=4 sw=4 et: