Solving problems is about exposing yourself to as many situations as possible, like finding a similarity metric between two strings, and practicing these strategies over and over. With time, this becomes second nature and the natural way you approach any problem. Big or small, always start with a plan, and use the other strategies mentioned here until you are confident and ready to code the solution.
In this post, my aim is to share an overview of how to find a similarity metric between two strings, which you can refer back to at any time. Take it easy and follow this discussion.
How do I get the probability of a string being similar to another string in Python?
I want to get a decimal value like 0.9 (meaning 90%). Preferably with standard Python and its library.
similar("Apple", "Appel")  # would have a high prob.
similar("Apple", "Mango")  # would have a lower prob.
There is a built-in one.
from difflib import SequenceMatcher

def similar(a, b):
    return SequenceMatcher(None, a, b).ratio()
>>> similar("Apple", "Appel")
0.8
>>> similar("Apple", "Mango")
0.0
I think you may be looking for an algorithm describing the distance between strings. Here are some you can refer to:
Solution #1: Python builtin
pros: native Python library, no extra package needed.
cons: too limited; there are many other good string similarity algorithms out there.
>>> from difflib import SequenceMatcher
>>> s = SequenceMatcher(None, "abcd", "bcde")
>>> s.ratio()
0.75
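One practical note on this built-in: SequenceMatcher also exposes quick_ratio() and real_quick_ratio(), which return upper bounds on ratio() and are much cheaper to compute. A short sketch of using them to skip clearly dissimilar pairs before paying for the exact score:

```python
from difflib import SequenceMatcher

# quick_ratio() and real_quick_ratio() return upper bounds on ratio():
# if even the bound is low, the exact ratio() cannot be high, so the
# expensive exact computation can be skipped for dissimilar pairs.
s = SequenceMatcher(None, "abcd", "bcde")
print(s.real_quick_ratio())  # loosest bound, fastest to compute
print(s.quick_ratio())       # tighter bound
print(s.ratio())             # exact value, slowest
```

This ordering (cheapest bound first) is how difflib itself filters candidates in functions like get_close_matches.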
Solution #2: jellyfish library
It's a very good library with good coverage and few issues. It supports:
– Levenshtein Distance
– Damerau-Levenshtein Distance
– Jaro Distance
– Jaro-Winkler Distance
– Match Rating Approach Comparison
– Hamming Distance
pros: easy to use, gamut of supported algorithms, tested.
cons: not a native library.
>>> import jellyfish
>>> jellyfish.levenshtein_distance(u'jellyfish', u'smellyfish')
2
>>> jellyfish.jaro_distance(u'jellyfish', u'smellyfish')
0.89629629629629637
>>> jellyfish.damerau_levenshtein_distance(u'jellyfish', u'jellyfihs')
1
Fuzzy Wuzzy is a package that implements Levenshtein distance in Python, with some helper functions for situations where you may want two distinct strings to be considered identical. For example:
>>> from fuzzywuzzy import fuzz
>>> fuzz.ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear")
91
>>> fuzz.token_sort_ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear")
100
You can create a function like:
def similar(w1, w2):
    # Pad the shorter string with spaces so both have equal length,
    # then count position-by-position matches.
    w1 = w1 + ' ' * (len(w2) - len(w1))
    w2 = w2 + ' ' * (len(w1) - len(w2))
    return sum(1 if i == j else 0 for i, j in zip(w1, w2)) / float(len(w1))
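As a design note, the same positional comparison can be written with itertools.zip_longest, which pads the shorter string implicitly; this variant (similar_padded is my name, not from the answer above) gives the same scores for strings without trailing spaces:

```python
from itertools import zip_longest

def similar_padded(w1, w2):
    # Pair characters position by position; the shorter string is
    # padded with None, which never equals a real character.
    if not w1 and not w2:
        return 1.0
    pairs = list(zip_longest(w1, w2))
    return sum(a == b for a, b in pairs) / len(pairs)

print(similar_padded("Apple", "Appel"))  # 3 of 5 positions match -> 0.6
```

The None fill value avoids the subtle case where a padding space would accidentally match a real trailing space in the other string.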
The distance package includes Levenshtein distance:
>>> import distance
>>> distance.levenshtein("lenvestein", "levenshtein")
3
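Since the question asks for a score between 0 and 1, an edit distance like this can be normalized by the longer string's length. A minimal pure-Python sketch (my own two-row dynamic-programming implementation of Levenshtein, not the distance package's internals):

```python
def levenshtein(a, b):
    # Classic dynamic programming over an (len(a)+1) x (len(b)+1)
    # grid, keeping only the previous row in memory.
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def similarity(a, b):
    # Normalize to [0, 1]: identical strings score 1.0.
    if not a and not b:
        return 1.0
    return 1 - levenshtein(a, b) / max(len(a), len(b))
```

This answers the original question's format directly, e.g. similarity("Apple", "Appel") is 0.6, while any of the libraries above can be swapped in for the distance computation.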
difflib.SequenceMatcher only finds the longest contiguous matching subsequence, which is often not what is desired. For example:
>>> a1 = "Apple"
>>> a2 = "Appel"
>>> a1 *= 50
>>> a2 *= 50
>>> SequenceMatcher(None, a1, a2).ratio()
0.012  # very low
>>> SequenceMatcher(None, a1, a2).get_matching_blocks()
[Match(a=0, b=0, size=3), Match(a=250, b=250, size=0)]  # only the first block is recorded
Finding the similarity between two strings is closely related to the concept of pairwise sequence alignment in bioinformatics. There are many dedicated libraries for this, including biopython. This example uses the Needleman-Wunsch algorithm:
>>> from Bio.Align import PairwiseAligner
>>> aligner = PairwiseAligner()
>>> aligner.score(a1, a2)
200.0
>>> aligner.algorithm
'Needleman-Wunsch'
Using biopython or another bioinformatics package is more flexible than any part of the Python standard library, since many different scoring schemes and algorithms are available. You can also retrieve the aligned sequences to visualise what is happening:
>>> alignment = next(aligner.align(a1, a2))
>>> alignment.score
200.0
>>> print(alignment)
Apple-Apple-Apple- ...
|||-|-|||-|-|||-|- ...
App-elApp-elApp-el ...
(output truncated; the "Apple-" / "App-el" pattern repeats 50 times)
SequenceMatcher is very slow on large input; here's how it can be done with diff-match-patch:
from diff_match_patch import diff_match_patch

def compute_similarity_and_diff(text1, text2):
    dmp = diff_match_patch()
    dmp.Diff_Timeout = 0.0  # disable the timeout so the diff is exact
    diff = dmp.diff_main(text1, text2, False)
    # similarity: fraction of the longer text covered by unchanged runs
    common_text = sum(len(txt) for op, txt in diff if op == 0)
    text_length = max(len(text1), len(text2))
    sim = common_text / text_length
    return sim, diff