I'm running into some small hiccups in #1633 regarding context-aware dialect conversion, and this issue records my idea for how to improve it.
Problem 1: each conversion pattern must persist layout information on the last op it creates, since that attribute is used by the framework for later type conversion. If a pattern also needs that layout for its implementation, then the type of the adaptor operand and the attribute associated with that operand will disagree (e.g., a materialized rank-1 type paired with a layout that assumes a rank-2 input tensor).
Problem 2: I am forced to remap values in the conversion framework because the original values may have lost their parent op (e.g., for a block argument, its block can be detached from the parent op during conversion, and then you can't do the attribute lookup if the layout is an arg attr of the func or an operand attr of a secret.generic). So I can't just have a conversion pattern implementer use the attribute lookup on op.operand for the layout and data-semantic type, while using adaptor.operand for the ciphertext-semantic type. This missing parent op also prevents me from using the original operand for type conversion.
Idea: instead of having the type conversion step read attributes from the IR at runtime, have it parse the attributes into a DenseMap (say, at construction time) and then use that DenseMap for lookups. First, since there's no IR traversal, you can still have the (disconnected) SSA values as map keys, so you don't need to remap values to enable type conversion (fixing problem 2). Second, since the layout information is now accessed via the op operand (not the adaptor operand), you don't need to persist layouts on the ops created by a kernel (fixing problem 1).
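A minimal sketch of what that frozen map could look like, using only generic MLIR APIs. FrozenLayoutContext and the "layout" attribute name are placeholders, not HEIR's actual API, and the secret.generic operand-attr case is elided:

```cpp
// Freeze layout attributes into a DenseMap keyed by Value at pass start, so
// later lookups don't require the value to still be attached to a parent op.
#include "mlir/IR/Operation.h"
#include "mlir/IR/Value.h"
#include "mlir/Interfaces/FunctionInterfaces.h"  // path varies by MLIR version
#include "llvm/ADT/DenseMap.h"

using namespace mlir;

class FrozenLayoutContext {
public:
  FrozenLayoutContext(Operation *root, StringRef layoutAttrName = "layout") {
    // One IR walk, up front, to record every value's layout attribute.
    root->walk([&](Operation *op) {
      // Op results: read the layout attribute from the defining op.
      if (Attribute layout = op->getAttr(layoutAttrName))
        for (Value result : op->getResults())
          layouts[result] = layout;
      // Block arguments of func-like ops: read the layout from the arg attrs.
      if (auto func = dyn_cast<FunctionOpInterface>(op)) {
        if (func.isExternal())
          return;
        for (BlockArgument arg : func.getArguments())
          if (Attribute layout =
                  func.getArgAttr(arg.getArgNumber(), layoutAttrName))
            layouts[arg] = layout;
      }
    });
  }

  // No IR traversal here: the key is just the (stable) Value handle, so this
  // works even if the value has since been detached from its parent op.
  Attribute lookup(Value value) const { return layouts.lookup(value); }

private:
  DenseMap<Value, Attribute> layouts;
};
```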
So basically, the initial attribute context will be frozen at the pass start time, and queried for both type conversion and kernel implementation via an op's original SSA values.
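Building on the sketch above, a hypothetical kernel pattern showing the query direction: layout and data-semantic info come from the original operand via the frozen context, while the ciphertext-semantic type comes from the adaptor operand. arith.addi just stands in for whatever op the kernel lowers; none of this is HEIR's real API.

```cpp
#include "mlir/Dialect/Arith/IR/Arith.h"
#include "mlir/Transforms/DialectConversion.h"

using namespace mlir;

struct LowerAddWithFrozenContext : OpConversionPattern<arith::AddIOp> {
  LowerAddWithFrozenContext(const TypeConverter &tc, MLIRContext *mlirCtx,
                            const FrozenLayoutContext &ctx)
      : OpConversionPattern(tc, mlirCtx), ctx(ctx) {}

  LogicalResult
  matchAndRewrite(arith::AddIOp op, OpAdaptor adaptor,
                  ConversionPatternRewriter &rewriter) const override {
    // Data-semantic layout: looked up on the original SSA value, even if its
    // parent op has since been detached.
    Attribute layout = ctx.lookup(op.getLhs());
    // Ciphertext-semantic type: the already-converted adaptor operand.
    Type ctTy = adaptor.getLhs().getType();
    // ... emit the kernel using `layout` and `ctTy`; no need to re-persist
    // `layout` on the ops created here, since type conversion reads the map.
    (void)layout;
    (void)ctTy;
    return rewriter.notifyMatchFailure(op, "kernel body elided in this sketch");
  }

  const FrozenLayoutContext &ctx;
};
```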
j2kun changed the title from "Rethink attribute-based context in context-aware dialect conversion" to "Rethink live-queried attribute context in context-aware dialect conversion" on Apr 5, 2025.