{
  "commit": "9f9df17e265b2c5aea11a95e3e69269d005ac0ae",
  "tree": "d5e20cc453ceb6f7ccea2006479ec90655396027",
  "parents": [
    "6ed54409c6e1a6d6e9f6f9dd22bfd9dd38ce1a7b"
  ],
  "author": {
    "name": "Paul E. McKenney",
    "email": "paulmck@linux.vnet.ibm.com",
    "time": "Tue Dec 10 14:19:27 2013 -0800"
  },
  "committer": {
    "name": "Paul E. McKenney",
    "email": "paulmck@linux.vnet.ibm.com",
    "time": "Fri Dec 13 09:05:13 2013 -0800"
  },
  "message": "powerpc: Full barrier for smp_mb__after_unlock_lock()\n\nThe powerpc lock acquisition sequence is as follows:\n\n\tlwarx; cmpwi; bne; stwcx.; lwsync;\n\nLock release is as follows:\n\n\tlwsync; stw;\n\nIf CPU 0 does a store (say, x=1) then a lock release, and CPU 1 does a\nlock acquisition then a load (say, r1=y), then there is no guarantee of\na full memory barrier between the store to 'x' and the load from 'y'.\nTo see this, suppose that CPUs 0 and 1 are hardware threads in the same\ncore that share a store buffer, and that CPU 2 is in some other core,\nand that CPU 2 does the following:\n\n\ty = 1; sync; r2 = x;\n\nIf 'x' and 'y' are both initially zero, then the lock acquisition and\nrelease sequences above can result in r1 and r2 both being equal to\nzero, which could not happen if unlock+lock was a full barrier.\n\nThis commit therefore makes powerpc's smp_mb__after_unlock_lock() be a\nfull barrier.\n\nSigned-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>\nAcked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>\nCc: Paul Mackerras <paulus@samba.org>\nCc: linuxppc-dev@lists.ozlabs.org\n",
  "tree_diff": [
    {
      "type": "modify",
      "old_id": "5f54a744dcc5e26921ddafe1d267985f71dd8540",
      "old_mode": 33188,
      "old_path": "arch/powerpc/include/asm/spinlock.h",
      "new_id": "f6e78d63fb6accd584e71cd5ebe7b26c5dae2916",
      "new_mode": 33188,
      "new_path": "arch/powerpc/include/asm/spinlock.h"
    }
  ]
}
