Ewm7122 implement artificial normalization algo #460


Merged

9 commits merged into next on Sep 27, 2024

Conversation

darshdinger (Contributor)

Description of work

This work facilitates the addition of artificial normalization within the reduction workflow for cases where a calibration does not exist or a normalization is missing. For now, this story deals with the creation of the algorithm that executes the backend logic to produce an artificial normalization.

Explanation of work

Within this work, a new algorithm named CreateArtificialNormalizationAlgo is introduced. Its purpose is to create an artificial normalization workspace, ensuring that reduction workflows remain uninterrupted even when calibration data is missing. The algorithm takes in diffraction data, applies various transformations such as peak clipping and optional smoothing, and outputs a new workspace that simulates normalized data.

Inputs:

  • InputWorkspace - diffraction data (units DSpacing, EMode="Elastic")
  • Ingredients - (
    PeakWindowClippingSize = Int (default 10),
    SmoothingParameter = Double,
    decreaseParameter = Boolean (default True),
    LSS = Boolean (default True)
  )
  • OutputWorkspaceName - name for the output workspace

Output:

  • Workspace containing Artificial Normalization
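For illustration only, here is what the Ingredients payload might look like once serialized to JSON. The field casing follows the ArtificialNormalizationIngredients usage in the CIS scripts below; the values shown are the defaults listed above.

```python
import json

# Hypothetical Ingredients payload; field names follow the
# ArtificialNormalizationIngredients object used in the CIS scripts below.
ingredients = {
    "peakWindowClippingSize": 10,  # Int, default 10
    "smoothingParameter": 0.1,     # Double
    "decreaseParameter": True,     # Boolean, default True
    "lss": True,                   # Boolean, default True
}
payload = json.dumps(ingredients)
```

In the scripts below the same payload is produced from the Pydantic ingredient object via its `json()` method.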

The algorithm processes each histogram in the input workspace, clipping peaks, smoothing the data, and applying transformations as specified by the ingredients. It clones the input workspace and modifies the data to produce the desired output, allowing the reduction process to continue even in the absence of proper calibration or normalization files.
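As a rough illustration of the peak-clipping step described above, here is a minimal NumPy sketch. The windowing scheme and the decreasing-window option are assumptions for illustration only; SNAPRed's actual kernel (and its LSS/smoothing handling) may differ.

```python
import numpy as np

def clip_peaks(y, window=10, decreasing=True):
    """Suppress peaks so only the smooth background survives.

    Illustrative window-based clipping: each point is replaced by the
    lesser of itself and the mean of its neighbors w bins away.
    """
    y = np.asarray(y, dtype=float).copy()
    windows = range(window, 0, -1) if decreasing else range(1, window + 1)
    for w in windows:
        clipped = y.copy()
        for i in range(w, len(y) - w):
            clipped[i] = min(y[i], 0.5 * (y[i - w] + y[i + w]))
        y = clipped
    return y

# A flat background of 1.0 with a single spike is clipped back toward 1.0
spectrum = np.ones(21)
spectrum[10] = 10.0
background = clip_peaks(spectrum)
```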

To test

Dev testing

Please ensure all pytests pass. A cis_test script named artificial_norm_script has been created for testing.

CIS testing

Please take a look at the cis_test mentioned above for testing. Two cases are presented here, both of which will be reflected in the final implementation in SNAPRed:

  1. Calibration exists without Normalization (in this case, we load calibrated diffraction focused workspace as input)
  2. Calibration does not exist (in this case, we take the raw input data and apply FocusSpectraAlgorithm to it to create artificial norm)
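The case split above can be sketched as follows. The helper and workspace names are hypothetical placeholders, not SNAPRed API; in the actual scripts the focusing in case 2 is performed by FocusSpectraAlgorithm.

```python
# Hypothetical sketch of how the two CIS cases choose the input for
# CreateArtificialNormalizationAlgo; workspace names are placeholders.
def choose_artificial_norm_input(calibration_exists: bool) -> str:
    if calibration_exists:
        # Case 1: calibration exists without normalization -> feed the
        # calibrated, diffraction-focused workspace straight in.
        return "calibrated_focused_ws"
    # Case 2: no calibration -> focus the raw data first (via
    # FocusSpectraAlgorithm in SNAPRed), then create the artificial norm.
    return "focused_raw_ws"
```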
# Use this script to test Diffraction Calibration
from mantid.simpleapi import *
import matplotlib.pyplot as plt
import numpy as np
import json
from typing import List


## for creating ingredients
from snapred.backend.dao.request.FarmFreshIngredients import FarmFreshIngredients
from snapred.backend.dao.ingredients.ArtificialNormalizationIngredients import ArtificialNormalizationIngredients as FakeNormIngredients
from snapred.backend.service.SousChef import SousChef

## for loading data
from snapred.backend.dao.ingredients.GroceryListItem import GroceryListItem
from snapred.backend.data.GroceryService import GroceryService

## the code to test
from snapred.backend.recipe.algorithm.PixelDiffractionCalibration import PixelDiffractionCalibration as PixelAlgo
from snapred.backend.recipe.algorithm.GroupDiffractionCalibration import GroupDiffractionCalibration as GroupAlgo
from snapred.backend.recipe.algorithm.CreateArtificialNormalizationAlgo import CreateArtificialNormalizationAlgo as FakeNormAlgo
from snapred.backend.recipe.DiffractionCalibrationRecipe import DiffractionCalibrationRecipe as Recipe

# for running through service layer
from snapred.backend.service.CalibrationService import CalibrationService
from snapred.backend.dao.request.DiffractionCalibrationRequest import DiffractionCalibrationRequest

from snapred.meta.Config import Config

#User input ###########################
runNumber = "58882"
groupingScheme = "Column"
cifPath = "/SNS/users/dzj/Calibration_next/CalibrantSamples/cif/Silicon_NIST_640d.cif"
calibrantSamplePath = "/SNS/users/dzj/Calibration_next/CalibrantSamples/Silicon_NIST_640D_001.json"
peakThreshold = 0.05
offsetConvergenceLimit = 0.1
isLite = True
Config._config["cis_mode"] = False
#######################################

### PREP INGREDIENTS ################
farmFresh = FarmFreshIngredients(
    runNumber=runNumber,
    useLiteMode=isLite,
    focusGroups=[{"name": groupingScheme, "definition": ""}],
    cifPath=cifPath,
    calibrantSamplePath=calibrantSamplePath,
    convergenceThreshold=offsetConvergenceLimit,
    maxOffset=100.0,
)
ingredients = SousChef().prepDiffractionCalibrationIngredients(farmFresh)

### FETCH GROCERIES ##################

clerk = GroceryListItem.builder()
clerk.neutron(runNumber).useLiteMode(isLite).add()
clerk.fromRun(runNumber).grouping(groupingScheme).useLiteMode(isLite).add()
groceries = GroceryService().fetchGroceryList(clerk.buildList())

### RUN PIXEL CALIBRATION ##########
pixelAlgo = PixelAlgo()
pixelAlgo.initialize()
pixelAlgo.setPropertyValue("Ingredients", ingredients.json())
pixelAlgo.setPropertyValue("InputWorkspace", groceries[0])
pixelAlgo.setPropertyValue("GroupingWorkspace", groceries[1])
pixelAlgo.execute()

assert False  # manual stop point: comment out to continue the script

median = json.loads(pixelAlgo.getPropertyValue("data"))["medianOffset"]
print(median)

count = 0
while median > offsetConvergenceLimit and count < 5:  # iterate until converged, at most 5 passes
    pixelAlgo.execute()
    median = json.loads(pixelAlgo.getPropertyValue("data"))["medianOffset"]
    count += 1
    
### RUN GROUP CALIBRATION

DIFCprev = pixelAlgo.getPropertyValue("CalibrationTable")

outputWS = mtd.unique_name(prefix="output_")
groupAlgo = GroupAlgo()
groupAlgo.initialize()
groupAlgo.setPropertyValue("Ingredients", ingredients.json())
groupAlgo.setPropertyValue("InputWorkspace", groceries[0])
groupAlgo.setPropertyValue("GroupingWorkspace", groceries[1])
groupAlgo.setPropertyValue("PreviousCalibrationTable", DIFCprev)
groupAlgo.setPropertyValue("OutputWorkspace", outputWS)
groupAlgo.execute()

### PAUSE
"""
Stop here and examine the fits.
Make sure the diffraction focused TOF workspace looks as expected.
Make sure the offsets workspace has converged, the DIFCpd only fit pixels inside the group,
and that the fits match with the TOF diffraction focused workspace.
"""
assert False

def convertIngredients(ingredients):
    if hasattr(ingredients, "json"):  # If it's a Pydantic object, use the json() method
        return ingredients.json()
    elif isinstance(ingredients, dict):  # If it's already a dictionary, convert to JSON
        return json.dumps(ingredients)
    else:
        raise TypeError("The provided ingredients object is not compatible with JSON serialization")


fakeNormIngredients = FakeNormIngredients(
    peakWindowClippingSize=10,
    smoothingParameter=0.1,
    decreaseParameter=True,
    lss=True,
)

fakeNormIngredients_json = convertIngredients(fakeNormIngredients)

fakeNormAlgo = FakeNormAlgo()
fakeNormAlgo.initialize()
fakeNormAlgo.setPropertyValue("InputWorkspace", outputWS)
fakeNormAlgo.setPropertyValue("Ingredients", fakeNormIngredients_json)
fakeNormAlgo.setPropertyValue("OutputWorkspace", outputWS)
fakeNormAlgo.execute()

assert False  # manual stop point: comment out to continue the script

### The following script reflects the use of the new algo "CreateArtificialNormalizationAlgo" within Reduction
import json

## for creating ingredients
from snapred.backend.dao.request.FarmFreshIngredients import FarmFreshIngredients
from snapred.backend.service.SousChef import SousChef
from snapred.backend.data.LocalDataService import LocalDataService
from snapred.backend.dao.ingredients.ArtificialNormalizationIngredients import ArtificialNormalizationIngredients as FakeNormIngredients
## for loading data
from snapred.backend.dao.ingredients.GroceryListItem import GroceryListItem
from snapred.backend.data.GroceryService import GroceryService

## the code to test
from snapred.backend.recipe.ReductionRecipe import ReductionRecipe as Recipe

# for running through service layer
from snapred.backend.dao.request.ReductionRequest import ReductionRequest
from snapred.backend.service.ReductionService import ReductionService
from snapred.backend.recipe.algorithm.FocusSpectraAlgorithm import FocusSpectraAlgorithm as FocusSpec
from snapred.backend.recipe.algorithm.CreateArtificialNormalizationAlgo import CreateArtificialNormalizationAlgo as FakeNormAlgo

from snapred.meta.Config import Config
from pathlib import Path
from snapred.meta.mantid.WorkspaceNameGenerator import WorkspaceNameGenerator as wng, ValueFormatter as wnvf

from mantid.testing import assert_almost_equal as assert_wksp_almost_equal
from mantid.simpleapi import ConvertToMatrixWorkspace

groceryService = GroceryService()

#User input ###########################
runNumber = "46680" #"57482"
isLite = True
Config._config["cis_mode"] = True
version=(1, None)
grouping = "Column"

### PREP INGREDIENTS ################

groups = LocalDataService().readGroupingMap(runNumber).getMap(isLite)

farmFresh = FarmFreshIngredients(
    runNumber=runNumber,
    versions=version,
    useLiteMode=isLite,
    focusGroups=list(groups.values()),
    timestamp=1726848143.8856316,
    keepUnfocused=True,
    convertUnitsTo="TOF",
)

ingredients = SousChef().prepReductionIngredients(farmFresh)

selectedFocusGroup = next(
    (fg for fg in farmFresh.focusGroups if fg.name == grouping), None
)

if selectedFocusGroup:
    print(f"Selected FocusGroup: {selectedFocusGroup}")
    
    updatedFarmFresh = FarmFreshIngredients(
        runNumber=farmFresh.runNumber,
        useLiteMode=farmFresh.useLiteMode,
        focusGroups=[selectedFocusGroup], 
    )
    
    pixelGroup = SousChef().prepPixelGroup(updatedFarmFresh)

# TODO: This probably needs to be a thing:
# ingredients.detectorPeaks = normalizationRecord.detectorPeaks


### FETCH GROCERIES ##################

clerk = GroceryListItem.builder()

for key, group in groups.items():
    clerk.fromRun(runNumber).grouping(group.name).useLiteMode(isLite).add()
groupingWorkspaces = GroceryService().fetchGroceryList(clerk.buildList())
# ...
clerk.name("inputWorkspace").neutron(runNumber).useLiteMode(isLite).add()
clerk.name("diffcalWorkspace").diffcal_table(runNumber, 1).useLiteMode(isLite).add()


groceries = GroceryService().fetchGroceryDict(
    groceryDict=clerk.buildDict()
)
groceries["groupingWorkspaces"] = groupingWorkspaces

rawWs = "tof_all_lite_copy1_046680"
groupingWs = f"SNAPLite_grouping__{grouping}_{runNumber}"

focusSpec = FocusSpec()
focusSpec.initialize()
focusSpec.setPropertyValue("InputWorkspace", rawWs)
focusSpec.setPropertyValue("OutputWorkspace", rawWs)
focusSpec.setPropertyValue("GroupingWorkspace", groupingWs)
focusSpec.setPropertyValue("Ingredients", pixelGroup.json())
focusSpec.setProperty("RebinOutput", False)
focusSpec.execute()

rawWs = ConvertToMatrixWorkspace(rawWs)

def convertIngredients(ingredients):
    if hasattr(ingredients, "json"):  # If it's a Pydantic object, use the json() method
        return ingredients.json()
    elif isinstance(ingredients, dict):  # If it's already a dictionary, convert to JSON
        return json.dumps(ingredients)
    else:
        raise TypeError("The provided ingredients object is not compatible with JSON serialization")


fakeNormIngredients = FakeNormIngredients(
    peakWindowClippingSize=10,
    smoothingParameter=0.1,
    decreaseParameter=True,
    lss=True,
)

fakeNormIngredients_json = convertIngredients(fakeNormIngredients)

fakeNormAlgo = FakeNormAlgo()
fakeNormAlgo.initialize()
fakeNormAlgo.setProperty("InputWorkspace", rawWs)
fakeNormAlgo.setPropertyValue("Ingredients", fakeNormIngredients_json)
fakeNormAlgo.setProperty("OutputWorkspace", "artificial_Norm_Ws")
fakeNormAlgo.execute()

assert False  # manual stop point: comment out to continue the script

Link to EWM item

EWM # 7122

Verification

  • the author has read the EWM story and acceptance criteria
  • the reviewer has read the EWM story and acceptance criteria
  • the reviewer certifies the acceptance criteria below reflect the criteria in EWM

Acceptance Criteria

This list is for ease of reference, and does not replace reading the EWM story as part of the review. Verify this list matches the EWM story before reviewing.

  • acceptance criterion 1
  • acceptance criterion 2

@darshdinger darshdinger marked this pull request as ready for review September 25, 2024 18:37

codecov bot commented Sep 27, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 95.87%. Comparing base (1756d1f) to head (b8a9c1d).
Report is 76 commits behind head on next.

Additional details and impacted files
@@           Coverage Diff           @@
##             next     #460   +/-   ##
=======================================
  Coverage   95.87%   95.87%           
=======================================
  Files          61       61           
  Lines        4439     4439           
=======================================
  Hits         4256     4256           
  Misses        183      183           


@dlcaballero16 (Collaborator) left a comment:

Ran CIS tests and worked as expected.

@darshdinger darshdinger merged commit 3fb9e6a into next Sep 27, 2024
7 checks passed
@darshdinger darshdinger deleted the ewm7122-implement-artificial-normalization-algo branch September 27, 2024 18:41