Mixed-size accesses #840

Status: Open — wants to merge 112 commits into base: development

Changes from 106 commits
3f3c9b6
AliasAnalysis.mayMix(MemoryCoreEvent,MemoryCoreEvent)
xeren Feb 5, 2025
8ca166c
Tearing
xeren Feb 5, 2025
cb4a966
AnalysisTest.allKindsOfMixedSizeAccesses()
xeren Feb 7, 2025
bba372c
fixup! AliasAnalysis.mayMix(MemoryCoreEvent,MemoryCoreEvent)
xeren Feb 7, 2025
9e5d5bd
fixup! Tearing
xeren Feb 7, 2025
b24a306
TransactionMarker
xeren Feb 7, 2025
d9a8288
SameTransaction
xeren Feb 7, 2025
22700f1
teared events inherit tags from their original
xeren Feb 7, 2025
67eae35
Track RMW after Tearing
xeren Feb 7, 2025
343e786
Memory.isBigEndian()
xeren Feb 10, 2025
190dc6b
Big Endian Tearing
xeren Feb 10, 2025
706c464
Missing update of MemoryEvent.getAccessType() in Load and Store
xeren Feb 11, 2025
5867d30
Remove inter-transaction program order
xeren Feb 11, 2025
6953d44
fixup! SameTransaction
xeren Feb 11, 2025
6dcfaa3
fixup! Track RMW after Tearing
xeren Feb 11, 2025
8502499
More precise post-processing in InclusionBasedPointerAnalysis
xeren Feb 11, 2025
8f38963
Remove unused array initializers in litmus tests
xeren Feb 11, 2025
1538560
fixup! Tearing
xeren Feb 11, 2025
2ab097a
Extend litmus grammar
xeren Feb 12, 2025
62c2a9d
Detailed MSA information
xeren Feb 13, 2025
3ff69de
Fix address arithmetics in tests from base i8 to i64, because the tes…
xeren Feb 24, 2025
4ecbb32
Fix Mixed-size access detection in InclusionBasedPointerAnalysis
xeren Feb 24, 2025
74e4964
Fix InclusionBasedPointerAnalysis.mayMixedSizeAccesses(MemoryCoreEven…
xeren Feb 24, 2025
4ae26ab
ExpressionFactory.makeIntConcat(Expression,Expression)
xeren Feb 24, 2025
9c48301
32-bit registers in VisitorLitmusAArch64
xeren Feb 24, 2025
9b0f1a9
Fix teared stores always being 8-bit
xeren Feb 25, 2025
e08de66
Add support for 8-, 16- and 32-bit operations in AArch64 litmus tests
xeren Feb 25, 2025
44b095e
Refactor
xeren Feb 25, 2025
74e3033
Program.addInit(MemoryObject,int)
xeren Feb 25, 2025
448bfe5
Tear initializations
xeren Feb 25, 2025
1294220
LitmusAARCH64MSATest
xeren Feb 12, 2025
9587591
cat and test adjustments
xeren Feb 13, 2025
4d93091
fixup! Tear initializations
xeren Feb 26, 2025
482b446
Different cat model
xeren Feb 26, 2025
c7322ed
TODO https://diy.inria.fr/mixed/mixed-aarch64/hardware.html#new
xeren Feb 28, 2025
2f106f8
Tearing of final values
xeren Mar 17, 2025
0a1c7d2
SameInstructionGraph
xeren Mar 17, 2025
6c2156a
Litmus test collection
xeren Mar 19, 2025
7053eea
fixup! Add support for 8-, 16- and 32-bit operations in AArch64 litmu…
xeren Mar 19, 2025
7dff4d3
move aarch64-mixed.cat
xeren Mar 20, 2025
cac9850
typed declarations in AARCH64 litmus
xeren Mar 20, 2025
1dd0fe5
LDP,STP
xeren Mar 20, 2025
141c3ef
Fix LL/SC and more
xeren Mar 20, 2025
bc72b38
Added SWP(AL) operations to ARMv8 Litmus
ThomasHaas Mar 20, 2025
934ee5c
Added Xchg event
ThomasHaas Mar 20, 2025
7e24dbc
Option program.analysis.mixedSize
xeren Mar 21, 2025
9a6f119
Ignore some MSA tests with too long in CI
xeren Mar 25, 2025
224ba24
Fixed ThreadCreation initializing MemoryObjects with wrong types
ThomasHaas Mar 26, 2025
03917b4
Add logging to Tearing.
ThomasHaas Mar 26, 2025
9094f8a
Added IntConcat and IntExtract expressions that are used in Tearing
ThomasHaas Mar 26, 2025
fede31e
More precision w.r.t. ITE in InclusionBasedPointerAnalysis
xeren Aug 24, 2023
7ea8ac1
More precision w.r.t. aggregates in InclusionBasedPointerAnalysis
xeren Mar 26, 2025
08e3f24
Merge remote-tracking branch 'refs/remotes/origin/development' into m…
xeren Mar 27, 2025
1299350
fixup! Merge remote-tracking branch 'refs/remotes/origin/development'…
xeren Mar 27, 2025
44bc911
Updated aarch64-mixed-new.cat
ThomasHaas Mar 27, 2025
d6a81ba
Merge branch 'refs/heads/development' into mixed-sized-accesses
ThomasHaas Mar 27, 2025
4dfff98
Fixed internal representation for IntExtract to have inclusive high-bit.
ThomasHaas Mar 27, 2025
e320c61
Merge branch 'refs/heads/add_bv_extract&concat' into mixed-sized-acce…
ThomasHaas Mar 27, 2025
d8407b7
Fixup
ThomasHaas Mar 27, 2025
798070d
Added TODO
ThomasHaas Mar 27, 2025
09cc562
Refinement now constructs its model of rmw directly from the encoding.
ThomasHaas Mar 28, 2025
aade24b
Opt-in MSA support
xeren Mar 28, 2025
a78c6b4
Insert Tearing into ProcessingManager
xeren Mar 28, 2025
0cd1933
lockref benchmarks
xeren Mar 28, 2025
3aee499
rename InstructionBoundary
xeren Mar 28, 2025
11b2b57
Better MemCpy handling
xeren Mar 28, 2025
426dd6b
Option program.processing.mixedSize
xeren Mar 28, 2025
466f4f9
Fix AnalysisTest
xeren Mar 28, 2025
85bba4b
Keep bytewise MemCpy without MSA
xeren Mar 28, 2025
08c9d84
Merge branch 'refs/heads/development' into mixed-sized-accesses
ThomasHaas Mar 31, 2025
e83f759
Streamline lockref versions 1 and 2.
ThomasHaas Mar 31, 2025
7fbb9c4
Fix MixedTest
xeren Apr 3, 2025
6f9e294
Fix SpirvRacesTest
xeren Apr 4, 2025
649b39e
Add amo, lxsx
xeren Apr 7, 2025
1469d80
Move aarch64-mixed-new.cat into aarch64.cat
xeren Apr 9, 2025
0ecc83c
Missing AARCH64-MIXED tests
xeren Apr 10, 2025
9b2059b
fixup! Add amo, lxsx
xeren Apr 10, 2025
855c091
fixup! Move aarch64-mixed-new.cat into aarch64.cat
xeren Apr 10, 2025
b2b3cb7
Merge remote-tracking branch 'refs/remotes/origin/development' into m…
xeren Apr 10, 2025
59b7146
Missing unaligned tests
xeren Apr 10, 2025
0d1c1f5
Proper lxsx in NativeRelationAnalysis
xeren Apr 10, 2025
901638e
Merge remote-tracking branch 'refs/remotes/origin/development' into m…
xeren Apr 10, 2025
da092aa
fixup! Merge remote-tracking branch 'refs/remotes/origin/development'…
xeren Apr 11, 2025
c8a3b5f
fixup! Merge remote-tracking branch 'refs/remotes/origin/development'…
xeren Apr 11, 2025
ccabb0e
Ignore AARCH64-MIXED tests with OutOfMemory
xeren Apr 11, 2025
ad97839
fixup! Ignore AARCH64-MIXED tests with OutOfMemory
xeren Apr 11, 2025
627af2f
Merge branch 'refs/heads/development' into mixed-sized-accesses
ThomasHaas Apr 11, 2025
c1a4bd2
Better lockref benchmarks
hernan-poncedeleon Apr 18, 2025
052a7eb
Enable new lockref tests in CI
hernan-poncedeleon Apr 18, 2025
e8eabb3
Feedback implemented
hernan-poncedeleon Apr 18, 2025
121dd3b
Arm instructions SWPB, SWPH
xeren Apr 23, 2025
e78c244
Merge remote-tracking branch 'origin/development' into mixed-sized-ac…
xeren Apr 23, 2025
7f8fd7e
Special visualization of si
xeren Apr 24, 2025
66fc931
Remove Tag.Armv8.MO_RX
xeren Apr 25, 2025
1ba3e2e
Merge remote-tracking branch 'refs/remotes/origin/development' into m…
xeren Apr 25, 2025
bbd044e
Merge branch 'refs/heads/development' into mixed-sized-accesses
ThomasHaas May 1, 2025
e09212d
Add SameInstruction relation to LazyRelationAnalysis and LazyEncodeSets
ThomasHaas May 1, 2025
bc92d9c
Implement Feedback
xeren May 8, 2025
d84014a
Merge remote-tracking branch 'refs/remotes/origin/development' into m…
xeren May 8, 2025
38f1f97
fixup! Implement Feedback
xeren May 9, 2025
eef0fb9
Merge remote-tracking branch 'refs/remotes/origin/development' into m…
xeren May 9, 2025
ed68c9f
fixup! Implement Feedback
xeren May 9, 2025
049a056
mixed-local.c
xeren May 9, 2025
8c9b8bb
MSA-aware MemToReg
xeren May 12, 2025
7a07ffb
fixup! MSA-aware MemToReg
xeren May 13, 2025
a6a68e8
Merge remote-tracking branch 'refs/remotes/origin/development' into m…
xeren May 13, 2025
2567cb4
Better naming of MemToReg registers
xeren May 13, 2025
8e2eb77
Merge remote-tracking branch 'refs/remotes/origin/development' into m…
xeren May 13, 2025
06da103
Tune Non-MSA AArch64 litmus tests
xeren May 13, 2025
12b5f85
Merge branch 'refs/heads/development' into mixed-sized-accesses
ThomasHaas May 14, 2025
19dcc9b
Fixup after merge
ThomasHaas May 14, 2025
3d3cf5b
Add back accidentally removed encoding method.
ThomasHaas May 14, 2025
48 changes: 48 additions & 0 deletions benchmarks/mixed/lockref-par1.c
@@ -0,0 +1,48 @@
#include <stdatomic.h>
#include <pthread.h>
#include <assert.h>
#include "lockref.h"

#ifndef NTHREADS
#define NTHREADS 2
#endif

struct lockref my_lockref;

void *get(void *unused) {

    lockref_get(&my_lockref);

    return NULL;
}

void *ret(void *unused) {

    lockref_put_return(&my_lockref);

    return NULL;
}

int main() {

    pthread_t g[NTHREADS];
    pthread_t r[NTHREADS];

    atomic_store(&my_lockref.lock_count, 0);

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&g[i], 0, get, (void *)(size_t)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&r[i], 0, ret, (void *)(size_t)i);

    for (int i = 0; i < NTHREADS; i++)
        pthread_join(g[i], 0);
    assert(my_lockref.count >= 0);
    assert(my_lockref.count <= NTHREADS);

    for (int i = 0; i < NTHREADS; i++)
        pthread_join(r[i], 0);
    assert(my_lockref.count == 0); // Wrong since some decrement might have no effect if there is no matching increment

    return 0;
}
47 changes: 47 additions & 0 deletions benchmarks/mixed/lockref-par2.c
@@ -0,0 +1,47 @@
#include <stdatomic.h>
#include <pthread.h>
#include <assert.h>
#include "lockref.h"

#ifndef NTHREADS
#define NTHREADS 2
#endif

struct lockref my_lockref;

void *get(void *unused) {

    lockref_get(&my_lockref);

    return NULL;
}

void *ret(void *unused) {

    lockref_put_return(&my_lockref);

    return NULL;
}

int main() {

    pthread_t g[NTHREADS];
    pthread_t r[NTHREADS];

    atomic_store(&my_lockref.lock_count, 0);

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&g[i], 0, get, (void *)(size_t)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&r[i], 0, ret, (void *)(size_t)i);

    for (int i = 0; i < NTHREADS; i++)
        pthread_join(g[i], 0);
    assert(my_lockref.count >= 0); // Correct since decrements fail if counter is zero
    assert(my_lockref.count <= NTHREADS); // Correct because if all increments come first we get == NTHREADS

    for (int i = 0; i < NTHREADS; i++)
        pthread_join(r[i], 0);

    return 0;
}
46 changes: 46 additions & 0 deletions benchmarks/mixed/lockref-par3.c
@@ -0,0 +1,46 @@
#include <stdatomic.h>
#include <pthread.h>
#include <assert.h>
#include "lockref.h"

#ifndef NTHREADS
#define NTHREADS 2
#endif

struct lockref my_lockref;

void *get(void *unused) {

    lockref_get(&my_lockref);

    return NULL;
}

void *ret(void *unused) {

    lockref_put_return(&my_lockref);

    return NULL;
}

int main() {

    pthread_t g[NTHREADS];
    pthread_t r[NTHREADS];

    atomic_store(&my_lockref.lock_count, 0);

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&g[i], 0, get, (void *)(size_t)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&r[i], 0, ret, (void *)(size_t)i);

    for (int i = 0; i < NTHREADS; i++)
        pthread_join(g[i], 0);
    assert(my_lockref.count == NTHREADS); // Wrong, increment and decrement can be interleaved

    for (int i = 0; i < NTHREADS; i++)
        pthread_join(r[i], 0);

    return 0;
}
46 changes: 46 additions & 0 deletions benchmarks/mixed/lockref-seq.c
@@ -0,0 +1,46 @@
#include <stdatomic.h>
#include <pthread.h>
#include <assert.h>
#include "lockref.h"

#ifndef NTHREADS
#define NTHREADS 2
#endif

struct lockref my_lockref;

void *get(void *unused) {

    lockref_get(&my_lockref);

    return NULL;
}

void *ret(void *unused) {

    lockref_put_return(&my_lockref);

    return NULL;
}

int main() {

    pthread_t g[NTHREADS];
    pthread_t r[NTHREADS];

    atomic_store(&my_lockref.lock_count, 0);

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&g[i], 0, get, (void *)(size_t)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(g[i], 0);
    assert(my_lockref.count == NTHREADS); // Correct since all get first

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&r[i], 0, ret, (void *)(size_t)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(r[i], 0);
    assert(my_lockref.count == 0); // Correct since every decrement has a matching increment

    return 0;
}
67 changes: 67 additions & 0 deletions benchmarks/mixed/lockref.h
@@ -0,0 +1,67 @@
#include <stdatomic.h>
#include <pthread.h>
#include <assert.h>
#include "spinlock.c"

#define CMPXCHG_LOOP(CODE, SUCCESS) do { \
    struct lockref old; \
    old.lock_count = atomic_load(&lockref->lock_count); \
    while (old.lock.lock == 0) { \
        struct lockref new = old; \
        CODE \
        if (atomic_compare_exchange_strong(&lockref->lock_count, \
                                           (int64_t *) &old.lock_count, \
                                           new.lock_count)) { \
            SUCCESS; \
        } \
    } \
} while (0)

struct lockref {
    union {
        _Atomic(int64_t) lock_count;
        struct {
            spinlock_t lock;
            _Atomic(int32_t) count;
        };
    };
};

/**
 * lockref_get - Increments reference count unconditionally
 * @lockref: pointer to lockref structure
 *
 * This operation is only valid if you already hold a reference
 * to the object, so you know the count cannot be zero.
 */
void lockref_get(struct lockref *lockref)
{
    CMPXCHG_LOOP(
        new.count++;
    ,
        return;
    );

    spin_lock(&lockref->lock);
    lockref->count++;
    spin_unlock(&lockref->lock);
}

/**
 * lockref_put_return - Decrement reference count if possible
 * @lockref: pointer to lockref structure
 *
 * Decrement the reference count and return the new value.
 * If the lockref was dead or locked, return an error.
 */
int lockref_put_return(struct lockref *lockref)
{
    CMPXCHG_LOOP(
        new.count--;
        if (old.count <= 0)
            return -1;
    ,
        return new.count;
    );
    return -1;
}
103 changes: 103 additions & 0 deletions benchmarks/mixed/lockref1.c
@@ -0,0 +1,103 @@
#include <stdatomic.h>
#include <pthread.h>
#include <assert.h>


#ifndef NTHREADS
#define NTHREADS 3
#endif

// ==========================================
// Spinlock
// ==========================================

struct spinlock_s {
    _Atomic(int32_t) lock;
};
typedef struct spinlock_s spinlock_t;

void await_for_lock(struct spinlock_s *l)
{
    while (atomic_load_explicit(&l->lock, memory_order_relaxed) != 0);
    return;
}

int try_get_lock(struct spinlock_s *l)
{
    int val = 0;
    return atomic_compare_exchange_strong_explicit(&l->lock, &val, 1, memory_order_acquire, memory_order_acquire);
}

void spin_lock(struct spinlock_s *l)
{
    do {
        await_for_lock(l);
    } while (!try_get_lock(l));
    return;
}

void spin_unlock(struct spinlock_s *l)
{
    atomic_store_explicit(&l->lock, 0, memory_order_release);
}

// ==========================================
// Lockref
// ==========================================
Comment on lines +44 to +46

I don't see the point of keeping both variants (especially since, according to Thomas, both generate the same dat3m IR). This just makes it more confusing for people trying to navigate the benchmarks in the repo, with no clear benefit.

I added another method to the API, refactored the code, and created some more interesting examples. I would simply remove lockref1 and lockref2.

Collaborator


The two versions are very different in terms of LLVM IR. The fact that we create (almost) the same internal IR is not obvious, which makes this a valuable benchmark for evaluating our processing pipeline.
I don't think we need the two versions as unit tests, but I would like to keep them around for manual testing, i.e., inspection of logging outputs. We could have a separate directory for such benchmarks.


typedef struct {
    union {
        struct {
            atomic_int lock;
            int count;
        };
        atomic_long lock_count;
    };
} lockref_t;

void lockref_get(lockref_t *lockref) {
    long old_val = atomic_load_explicit(&lockref->lock_count, memory_order_relaxed);

    while (((lockref_t *)&old_val)->lock == 0) {
        long new_val = old_val;
        ((lockref_t *)&new_val)->count++;
        if (atomic_compare_exchange_strong_explicit(
                &lockref->lock_count, &old_val, new_val,
                memory_order_relaxed, memory_order_relaxed)) {
            return;
        }
    }

    spin_lock(&lockref->lock);
    lockref->count++;
    spin_unlock(&lockref->lock);
}

// ==========================================
// Main
// ==========================================

lockref_t my_lockref;

void *thread_n(void *unused) {

    lockref_get(&my_lockref);

    return NULL;
}

int main() {

    pthread_t t[NTHREADS];

    atomic_store(&my_lockref.lock_count, 0);

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], 0, thread_n, (void *)(size_t)i);

    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], 0);

    assert(my_lockref.count == NTHREADS);
    return 0;
}