Regarding your question of how to "measure the importance and centrality of different words in a text":
The textacy library offers several algorithms for extracting key terms easily. The current state-of-the-art algorithm for this task seems to be YAKE, and textacy has an easy-to-use implementation of it. Note that textacy is built on top of spaCy, so it works something like this:
from textacy.ke import yake
import spacy
import wikipedia

nlp = spacy.load("en_core_web_sm")

# some string to extract key terms from
text = wikipedia.page("Emmanuel Macron").content
doc = nlp(text)

# the 10 most important nouns, proper nouns and adjectives, lemmatized
keywords = yake(doc, normalize="lemma", include_pos=["NOUN", "PROPN", "ADJ"], window_size=5, topn=10)
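yake returns a list of (keyterm, score) tuples, where lower scores indicate more important terms. One caveat: in more recent textacy versions the keyterm functions have moved from textacy.ke to textacy.extract.keyterms, so adjust the import to your installed version.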
Regarding your final goal to "explain why similar cases [texts] on a certain topic are decided differently in two different jurisdictions":
I'm not sure keyword extraction is going to help you much here. If I understand correctly, you have two corpora: corpus A with a group of texts with known outcome X, and corpus B with known outcome Y. You can apply keyword extraction to both corpora separately, but this will only return the words that are most central to each respective corpus; it will not tell you which keywords are most exclusive to corpus A compared to corpus B, as the sketch below illustrates. (You might still gain some insights by interpreting the keywords qualitatively.)
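To make the caveat concrete, here is a minimal sketch (doc_a and doc_b are hypothetical spaCy docs, each built from one concatenated corpus): diffing the top-N keyword lists only shows which top terms differ between the lists, not which terms are statistically exclusive to one corpus.

from textacy.ke import yake

def top_terms(doc, n=20):
    # yake returns (term, score) pairs; keep just the terms
    return {term for term, score in yake(doc, normalize="lemma", topn=n)}

terms_a = top_terms(doc_a)  # doc_a: spaCy doc for corpus A (hypothetical)
terms_b = top_terms(doc_b)  # doc_b: spaCy doc for corpus B (hypothetical)

print("in A's top terms but not B's:", terms_a - terms_b)
print("in B's top terms but not A's:", terms_b - terms_a)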
One alternative might be topic modeling. A topic model tells you which words are most exclusive to one topic compared to another (a topic being a group of words that often occur together within and across texts). You could combine your two corpora A and B, run topic modeling on the combined corpus, and check whether certain topics (word combinations) are correlated with outcome X or Y.
The best library in Python for topic modeling is Gensim (though I'm less familiar with it, and I have the impression that topic modeling libraries in R are more comprehensive than those in Python). Something like the sketch below would get you started.
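A minimal sketch of that approach with Gensim's LDA implementation (corpus_a and corpus_b are placeholder token lists; in practice you would run your own preprocessing and tune num_topics and passes):

from gensim import corpora, models

# placeholder documents; replace with your own tokenized court decisions
corpus_a = [["court", "ruled", "appeal"], ["judge", "dismissed", "claim"]]
corpus_b = [["court", "granted", "damages"], ["jury", "awarded", "damages"]]

docs = corpus_a + corpus_b
dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(doc) for doc in docs]

lda = models.LdaModel(bow, num_topics=5, id2word=dictionary, passes=10, random_state=0)

# inspect the topics (word combinations)
for topic_id, words in lda.print_topics(num_topics=5, num_words=8):
    print(topic_id, words)

# check whether some topics are more prevalent in corpus A than in corpus B
for i, doc_bow in enumerate(bow):
    label = "A" if i < len(corpus_a) else "B"
    print(label, lda.get_document_topics(doc_bow))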