
Constraints and Landmarks (#355)
* Update vtk.js version to use new classes

* Use new vtk.js classes to make landmarks moveable

* Fix landmark cube sizing

* clear cached widgets on project change

* Change vtk.js version specification

* Allow user to change landmarkInfo client-side

* Save landmark changes to server

* Add landmarkSize input

* Update yarn.lock

* Update table interface for adding and deleting landmarks (still needs implementation)

* Allow user to place new landmarks

* Save landmark changes to project swproj file

* Fix infinite loop in DataList update

* Fixes for creating landmarks on multidomain

* Update landmark position on drag

* Use allSetLandmarks as source of truth for positions

* Copy vtk.js SeedWidget and modify handleMouseMove

* Assign landmark IDs upon fetch

* Minor usability changes

* Add check to find world coords in landmark widget

* Refactoring and simplifying logic to fix strange behavior

* Fix landmark deletions

* Minor behavior fixes

* Remove console log

* Lint fix

* Update models such that constraints exist "per-project"

* Mimic Landmarks behavior for Constraints component

* Fix landmarks suffix

* Refer to domain explicitly instead of by index in anatomy list and actor list
(Include domain on vtkPolyData field data, restructure stored placements for landmarks and constraints)

* Reverse order of vectors in normal calculation (right hand rule)

* Put back InfoTab component to handle showing one layer at a time

* Read free form constraint data from file

* Change `updateColors` to use current constraint data
instead of widget state, begin coloring by paint constraints

* Fix constraint color function

* Set plane widget outline invisible

* Lint fix

* Save edited constraint information to DB...
+ convert between three-points and origin-normal plane representations

* Use KDTree to find nearest points in paint constraint data

* Allow new plane placements

* Reassign constraint location data when reassigning constraint ids by index

* Increase render debounce time to reduce redundant calls

* Fix normals flipping when z is negative (could have a better solution)

* Add ability to edit paint constraints (still slow and buggy)

* Add visibility toggle column to constraints table
... to avoid visual & computational overload with many constraints

* Small UI fixes

* Fix conflicting CSS rules

* Lighten constraint gray for better contrast with background

* Pin node version in yarn container

* Fix PlaneWidget styling TODOs

* refactor: use patchwise widget updates for state sync

* fix paint and plane widget behavior for updating shape colors

* Add a toggle to switch between exclusion/inclusion for constraint painting

* Update dependencies

* Prevent remaining occurrences of `model._openGLRenderer is undefined` errors by blocking events during `renderGrid`

* fix `updateColors`: paintwidget data should not override other constraint coloring

* increase constraint column width; don't cut off wide buttons

* Remove unused widgets after a landmark or constraint deletion

* Fix various multi-renderer-only bugs

* Don't show a plane widget with undefined origin or normal

* Revert changes to syncCameras function; simplified version doesn't handle subjects with different centers

* Filter undefined locations when saving landmark data

* Fix swcc pydantic error about `mean_particles` scope

* Prevent render loading spinner appearing when no subjects selected

* "Show subject" button should become "Hide subject" instead of disappearing

* When setting a constraint location, reassign allSetConstraints for changesMade watcher

* swcc: specify file name in download location to eliminate same-name conflicts
(like with landmarks.csv and constraints.json)

* Ensure landmark info is saved in a schema compatible with Studio

* Add the ability to name constraints

* Add warning message when any constraints with no placements exist; those constraints cannot be saved.

* Fix lint and type failures

* fix: add new keyword to UInt8Array creation

* fix creation of new paint constraint data

* fix: protect from undefined widget managers after hiding subjects

* fix: don't use typed array for new paint constraint scalars; uint8array serializes as object instead of list.

* fix: call getLandmarks and getConstraints during the first render when the layer is enabled (even if no shapes are selected for rendering)

* fix: L&C deletion behavior: remove all references to invalid widget

* fix: refactor plane widget end interaction event

* fix: prevent `c is undefined` bug when enabling Particles layer

* fix: Use loading flags for landmarks and constraints to prevent actions before fetching is complete

* fix: render after landmark info (including color) has been updated

* fix: reassign `allSetLandmarks` by index when a landmark is deleted

* fix: reduce debounce time for render refresh
annehaley authored Mar 18, 2024
1 parent b4c2e73 commit a2b786b
Showing 26 changed files with 5,114 additions and 2,850 deletions.
27 changes: 27 additions & 0 deletions shapeworks_cloud/core/migrations/0037_landmarks_constraints.py
@@ -0,0 +1,27 @@
+# Generated by Django 3.2.21 on 2023-11-13 17:30
+
+from django.db import migrations, models
+import django.db.models.deletion
+
+
+class Migration(migrations.Migration):
+    dependencies = [
+        ('core', '0036_analysis_multi_domain'),
+    ]
+
+    operations = [
+        migrations.RemoveField(
+            model_name='constraints',
+            name='optimized_particles',
+        ),
+        migrations.AddField(
+            model_name='constraints',
+            name='project',
+            field=models.ForeignKey(
+                null=True,
+                on_delete=django.db.models.deletion.CASCADE,
+                related_name='constraints',
+                to='core.project',
+            ),
+        ),
+    ]
7 changes: 2 additions & 5 deletions shapeworks_cloud/core/models.py
@@ -271,11 +271,8 @@ class Constraints(TimeStampedModel, models.Model):
     file = S3FileField()
     subject = models.ForeignKey(Subject, on_delete=models.CASCADE, related_name='constraints')
     anatomy_type = models.CharField(max_length=255)
-    optimized_particles = models.ForeignKey(
-        OptimizedParticles,
-        on_delete=models.SET_NULL,
-        related_name='constraints',
-        null=True,
+    project = models.ForeignKey(
+        Project, on_delete=models.CASCADE, related_name='constraints', null=True
     )
 
 
101 changes: 101 additions & 0 deletions shapeworks_cloud/core/rest.py
@@ -1,10 +1,12 @@
 import base64
+import json
 import os
 from pathlib import Path
 from tempfile import TemporaryDirectory, gettempdir
 from typing import Dict, Type
 
 from django.contrib.auth import logout
+from django.core.files.base import ContentFile
 from django.db.models import Q
 from django.utils import timezone
 from django_filters.rest_framework import DjangoFilterBackend
@@ -313,6 +315,105 @@ def download(self, request, **kwargs):
         project = self.get_object()
         return Response(serializers.ProjectDownloadSerializer(project).data)
 
+    @action(
+        detail=True,
+        url_path='landmarks',
+        url_name='landmarks',
+        methods=['POST'],
+    )
+    def set_landmarks(self, request, **kwargs):
+        project = self.get_object()
+        form_data = request.data
+        landmarks_info = form_data.get('info')
+        landmarks_locations = form_data.get('locations')
+
+        project.landmarks_info = landmarks_info
+        project_file_contents = json.loads(project.file.read())
+        project_file_contents['landmarks'] = landmarks_info
+        project.file.save(
+            project.file.name.split('/')[-1],
+            ContentFile(json.dumps(project_file_contents).encode()),
+        )
+        project.save()
+
+        ids_existing_with_coords = []
+        for subject_id, data in landmarks_locations.items():
+            target_subject = models.Subject.objects.get(id=subject_id)
+            for anatomy_type, locations in data.items():
+                landmarks_object, created = models.Landmarks.objects.get_or_create(
+                    project=project, subject=target_subject, anatomy_type=anatomy_type
+                )
+                file_content = ''
+                if (
+                    locations is not None
+                    and len(locations) > 0
+                    and locations[0] is not None
+                    and len(locations[0]) == 3
+                ):
+                    file_content = '\n'.join(
+                        ' '.join(str(n) for n in (loc.values() if isinstance(loc, dict) else loc))
+                        for loc in locations
+                    )
+                file_name = 'landmarks.csv'
+                if landmarks_object.file:
+                    file_name = landmarks_object.file.name.split('/')[-1]
+                landmarks_object.file.save(
+                    file_name,
+                    ContentFile(file_content.encode()),
+                )
+                ids_existing_with_coords.append(landmarks_object.id)
+
+        models.Landmarks.objects.filter(project=project).exclude(
+            id__in=ids_existing_with_coords
+        ).delete()
+
+        log_write_access(
+            timezone.now(),
+            self.request.user.username,
+            'Set Project Landmarks',
+            project.id,
+        )
+        return Response(serializers.ProjectReadSerializer(project).data)
+
+    @action(
+        detail=True,
+        url_path='constraints',
+        url_name='constraints',
+        methods=['POST'],
+    )
+    def set_constraints(self, request, **kwargs):
+        project = self.get_object()
+        form_data = request.data
+        constraints_locations = form_data.get('locations')
+
+        ids_existing_with_coords = []
+        for subject_id, data in constraints_locations.items():
+            target_subject = models.Subject.objects.get(id=subject_id)
+            for anatomy_type, locations in data.items():
+                constraints_object, created = models.Constraints.objects.get_or_create(
+                    project=project, subject=target_subject, anatomy_type=anatomy_type
+                )
+                file_name = 'constraints.json'
+                if constraints_object.file:
+                    file_name = constraints_object.file.name.split('/')[-1]
+                constraints_object.file.save(
+                    file_name,
+                    ContentFile(json.dumps(locations).encode()),
+                )
+                ids_existing_with_coords.append(constraints_object.id)
+
+        models.Constraints.objects.filter(project=project).exclude(
+            id__in=ids_existing_with_coords
+        ).delete()
+
+        log_write_access(
+            timezone.now(),
+            self.request.user.username,
+            'Set Project Constraints',
+            project.id,
+        )
+        return Response(serializers.ProjectReadSerializer(project).data)
+
     @action(
         detail=True,
         url_path='groom',
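For reference, the landmark file written by `set_landmarks` is plain space-separated coordinates, one landmark per line, and the handler accepts each location either as a dict or as a 3-element list. A standalone sketch of that serialization step (the helper name is ours, not from the PR):

```python
def landmarks_csv(locations):
    """Serialize landmark locations to the space-separated format
    written to landmarks.csv; accepts dicts or 3-element lists."""
    # Same guard as the endpoint: empty or placeholder lists produce an empty file.
    if not locations or locations[0] is None or len(locations[0]) != 3:
        return ''
    return '\n'.join(
        ' '.join(str(n) for n in (loc.values() if isinstance(loc, dict) else loc))
        for loc in locations
    )

# Mixed dict/list input; prints '1.0 2.0 3.0' then '4.0 5.0 6.0'
print(landmarks_csv([{'x': 1.0, 'y': 2.0, 'z': 3.0}, [4.0, 5.0, 6.0]]))
```

Note that dict inputs rely on insertion-ordered keys (x, y, z), which `dict.values()` preserves in Python 3.7+.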
38 changes: 30 additions & 8 deletions shapeworks_cloud/core/serializers.py
@@ -12,6 +12,14 @@ class Meta:
         fields = '__all__'
 
 
+class ConstraintsSerializer(serializers.ModelSerializer):
+    file = S3FileSerializerField()
+
+    class Meta:
+        model = models.Constraints
+        fields = '__all__'
+
+
 class CachedAnalysisGroupSerializer(serializers.ModelSerializer):
     class Meta:
         model = models.CachedAnalysisGroup
@@ -72,6 +80,7 @@ class ProjectReadSerializer(serializers.ModelSerializer):
     file = S3FileSerializerField()
     last_cached_analysis = CachedAnalysisReadSerializer(allow_null=True)
     landmarks = LandmarksSerializer(many=True)
+    constraints = ConstraintsSerializer(many=True)
 
     class Meta:
         model = models.Project
@@ -168,18 +177,17 @@ class Meta:
         fields = '__all__'
 
 
-class ConstraintsSerializer(serializers.ModelSerializer):
-    file = S3FileSerializerField()
-
-    class Meta:
-        model = models.Constraints
-        fields = '__all__'
-
-
 class GroomedSegmentationSerializer(serializers.ModelSerializer):
     file = S3FileSerializerField()
     pre_cropping = S3FileSerializerField(required=False, allow_null=True)
     pre_alignment = S3FileSerializerField(required=False, allow_null=True)
+    anatomy_type = serializers.SerializerMethodField('get_anatomy_type')
+
+    def get_anatomy_type(self, obj):
+        if obj.segmentation:
+            return obj.segmentation.anatomy_type
+        else:
+            return None
 
     class Meta:
         model = models.GroomedSegmentation
@@ -190,6 +198,13 @@ class GroomedMeshSerializer(serializers.ModelSerializer):
     file = S3FileSerializerField()
     pre_cropping = S3FileSerializerField(required=False, allow_null=True)
     pre_alignment = S3FileSerializerField(required=False, allow_null=True)
+    anatomy_type = serializers.SerializerMethodField('get_anatomy_type')
+
+    def get_anatomy_type(self, obj):
+        if obj.mesh:
+            return obj.mesh.anatomy_type
+        else:
+            return None
 
     class Meta:
         model = models.GroomedMesh
@@ -215,6 +230,13 @@ class OptimizedParticlesReadSerializer(OptimizedParticlesSerializer):
 class ReconstructedSampleSerializer(serializers.ModelSerializer):
     file = S3FileSerializerField()
     particles = OptimizedParticlesReadSerializer(required=False)
+    anatomy_type = serializers.SerializerMethodField('get_anatomy_type')
+
+    def get_anatomy_type(self, obj):
+        if obj.particles:
+            return obj.particles.anatomy_type
+        else:
+            return None
 
     class Meta:
         model = models.ReconstructedSample
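The three `get_anatomy_type` additions all apply the same null-safe lookup on an optional relation: derive `anatomy_type` from the related object when it exists, otherwise report nothing. Outside DRF the pattern reduces to (names illustrative):

```python
from types import SimpleNamespace
from typing import Optional

def get_anatomy_type(related) -> Optional[str]:
    # Mirrors the SerializerMethodField bodies above: the related object
    # (segmentation, mesh, or particles) may be null, in which case no
    # anatomy_type can be reported.
    return related.anatomy_type if related else None

mesh = SimpleNamespace(anatomy_type='left_atrium')
print(get_anatomy_type(mesh))  # left_atrium
print(get_anatomy_type(None))  # None
```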
1 change: 1 addition & 0 deletions swcc/swcc/models/constants.py
@@ -15,5 +15,6 @@
     'groomed',
     'local',
     'world',
+    'landmarks_file',
     'constraints',
 ]
7 changes: 5 additions & 2 deletions swcc/swcc/models/file.py
@@ -59,7 +59,7 @@ def upload(self):
 
         return self.field_value
 
-    def download(self, path: Union[str, Path]) -> Path:
+    def download(self, path: Union[str, Path], file_name=None) -> Path:
         from .utils import raise_for_status
 
         if self.url is None:
@@ -75,7 +75,10 @@ def download(self, path: Union[str, Path]) -> Path:
         if not path.is_dir():
            path.mkdir(parents=True, exist_ok=True)
 
-        path = path / self.url.path.split('/')[-1]
+        if file_name:
+            path = path / Path(file_name)
+        else:
+            path = path / self.url.path.split('/')[-1]
         r = requests.get(self.url, stream=True)
         raise_for_status(r)
 
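The new `file_name` parameter lets callers override the name taken from the URL tail, which is what lets two downloads whose URLs end in the same basename (like `landmarks.csv` for several subjects) land under distinct target names. A rough sketch of the resolution logic, using `urllib.parse` as a stand-in for the library's URL object:

```python
from pathlib import Path
from urllib.parse import urlparse

def resolve_target(path: Path, url: str, file_name=None) -> Path:
    """Pick the on-disk destination the way File.download now does:
    an explicit file_name wins, otherwise fall back to the URL tail."""
    if file_name:
        return path / Path(file_name)
    # Query strings (e.g. S3 signatures) are excluded because only
    # the parsed path component is split.
    return path / urlparse(url).path.split('/')[-1]

print(resolve_target(Path('out'), 'https://s3.host/bucket/abc123?sig=x').name)  # abc123
print(resolve_target(Path('out'), 'https://s3.host/bucket/abc123', 'landmarks.csv').name)  # landmarks.csv
```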
2 changes: 1 addition & 1 deletion swcc/swcc/models/other_models.py
@@ -109,7 +109,7 @@ class Constraints(ApiModel):
     file_source: Union[str, Path]
     subject: Subject
     anatomy_type: str = Field(min_length=1)
-    optimized_particles: Optional[OptimizedParticles]
+    project: Project
 
 
 class CachedAnalysisGroup(ApiModel):
38 changes: 22 additions & 16 deletions swcc/swcc/models/project.py
@@ -131,7 +131,6 @@ def relative_path(filepath):
         groomed_shape: Union[GroomedMesh, GroomedSegmentation, None] = None
         world_particles_path = None
         local_particles_path = None
-        particles = None
         constraints_path = None
         transform = None
 
@@ -183,7 +182,7 @@ def relative_path(filepath):
                 transform = Path(temp_dir) / 'transform'
                 with transform.open('w') as f:
                     f.write(value)
-            elif key == 'landmarks':
+            elif key == 'landmarks_file':
                 Landmarks(
                     file_source=relative_path(value),
                     subject=subject,
@@ -201,7 +200,7 @@
             groomed_mesh = groomed_shape
         elif type(groomed_shape) == GroomedSegmentation:
             groomed_segmentation = groomed_shape
-        particles = OptimizedParticles(
+        OptimizedParticles(
             world_source=world_particles_path,
             local_source=local_particles_path,
             transform_source=transform,
@@ -212,12 +211,15 @@
             anatomy_type=anatomy_id,
         ).create()
         if constraints_path:
-            Constraints(
-                file_source=constraints_path,
-                subject=subject,
-                optimized_particles=particles,
-                anatomy_type=anatomy_id,
-            ).create()
+            with open(constraints_path) as f:
+                constraints_contents = json.load(f)
+            if constraints_contents:
+                Constraints(
+                    file_source=constraints_path,
+                    subject=subject,
+                    project=self.project,
+                    anatomy_type=anatomy_id,
+                ).create()
 
     def load_analysis_from_json(self, file_path):
         project_root = Path(str(self.project.file.path)).parent
@@ -234,12 +236,12 @@ def load_analysis_from_json(self, file_path):
         for mean_particle_path in contents['mean']['particle_files']:
             mean_particles.append(analysis_file_location.parent / Path(mean_particle_path))
 
-            for i in range(len(mean_shapes)):
-                cams = CachedAnalysisMeanShape(
-                    file_source=mean_shapes[i],
-                    particles_source=mean_particles[i] if mean_particles else None,
-                ).create()
-                mean_shapes_cache.append(cams)
+        for i in range(len(mean_shapes)):
+            cams = CachedAnalysisMeanShape(
+                file_source=mean_shapes[i],
+                particles_source=mean_particles[i] if mean_particles else None,
+            ).create()
+            mean_shapes_cache.append(cams)
 
         modes = []
         for mode in contents['modes']:
@@ -402,7 +404,11 @@ def download(self, folder: Union[str, Path]):
         print_progress_bar(0, len(files))
         for index, (path, url) in enumerate(files.items()):
             file_item: File = File(url)
-            file_item.download(Path(folder, *path.split('/')[:-1]))
+            path_split = path.split('/')
+            file_item.download(
+                Path(folder, *path_split[:-1]),
+                file_name=path_split[-1],
+            )
             print_progress_bar(index + 1, len(files))
         session.close()
         print()
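The constraints block now parses the JSON before creating a `Constraints` record, so files that deserialize to an empty object or list no longer produce records (which matters given the PR's warning about constraints with no placements). A standalone sketch of that guard (the helper name is ours):

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def should_create_constraints(constraints_path) -> bool:
    """Only create a Constraints record when the file has real content:
    json.load returning {}, [], or null all fall through."""
    with open(constraints_path) as f:
        return bool(json.load(f))

with TemporaryDirectory() as d:
    empty, full = Path(d) / 'empty.json', Path(d) / 'full.json'
    empty.write_text('{}')
    # Illustrative payload; the real constraint schema is richer.
    full.write_text('{"planes": [{"points": [[0,0,0],[1,0,0],[0,1,0]]}]}')
    print(should_create_constraints(empty))  # False
    print(should_create_constraints(full))   # True
```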
3 changes: 2 additions & 1 deletion web/shapeworks/package.json
@@ -16,9 +16,10 @@
     "echarts": "^5.4.1",
     "itk": "^14.1.1",
     "json-schema-defaults": "^0.4.0",
+    "kd-tree-javascript": "1.0.0",
     "lodash": "^4.17.21",
     "shader-loader": "^1.3.1",
-    "vtk.js": "*",
+    "vtk.js": ">=29.4.6",
     "vue": "^2.7.14",
     "vue-echarts": "^6.2.4",
     "vue-router": "^3.2.0",
22 changes: 21 additions & 1 deletion web/shapeworks/src/api/rest.ts
@@ -1,4 +1,4 @@
-import { AnalysisParams, DataObject, Dataset, Project, Subject } from "@/types";
+import { AnalysisParams, DataObject, Dataset, LandmarkInfo, Constraints, Project, Subject } from "@/types";
 import { apiClient } from "./auth";
 import { loadGroomedShapeForObject, loadParticlesForObject } from "@/store";
 
@@ -154,3 +154,23 @@ export async function deleteTaskProgress(taskId: number){
 export async function abortTask(taskId: number) {
   return (await apiClient.post(`/task-progress/${taskId}/abort/`)).data
 }
+
+export async function saveLandmarkData(
+  projectId: number,
+  landmarkInfo: LandmarkInfo[],
+  landmarkLocations: Record<number, Record<number, number[][]>>
+) {
+  return (await apiClient.post(`/projects/${projectId}/landmarks/`, {
+    info: landmarkInfo,
+    locations: landmarkLocations,
+  })).data
+}
+
+export async function saveConstraintData(
+  projectId: number,
+  constraintLocations: Record<number, Record<number, number[][]>>
+) {
+  return (await apiClient.post(`/projects/${projectId}/constraints/`, {
+    locations: constraintLocations,
+  })).data
+}
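Both save helpers post a nested `locations` mapping, subject id to anatomy type to a list of 3-D coordinates, which the Django actions then walk with two `.items()` loops. A quick structural check of that payload shape, sketched in Python to match the rest of these examples (the validator is illustrative; the constraints endpoint in practice accepts richer objects than bare coordinate triples):

```python
def valid_locations(payload) -> bool:
    """Check the shape the /landmarks/ endpoint iterates:
    {subject_id: {anatomy_type: [[x, y, z], ...]}}."""
    if not isinstance(payload, dict):
        return False
    return all(
        isinstance(per_subject, dict)
        and all(
            isinstance(locs, list)
            # None entries mirror unplaced landmarks, which the server filters.
            and all(loc is None or len(loc) == 3 for loc in locs)
            for locs in per_subject.values()
        )
        for per_subject in payload.values()
    )

print(valid_locations({1: {'femur': [[0.0, 1.0, 2.0]]}}))  # True
print(valid_locations({1: {'femur': [[0.0, 1.0]]}}))       # False
```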
