Backtracking is a general algorithm for finding all (or some) solutions to some computational problems, notably constraint satisfaction problems, that incrementally builds candidates to the solutions, and abandons a candidate (“backtracks”) as soon as it determines that the candidate cannot possibly be completed to a valid solution.
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. The algorithm starts at the root node (selecting some arbitrary node as the root node in the case of a graph) and explores as far as possible along each branch before backtracking.
For me, the difference between backtracking and DFS is that backtracking works on an implicit tree while DFS works on an explicit one. This sounds trivial, but it matters a lot. When backtracking visits the search space of a problem, the implicit tree is traversed and pruned on the fly, in the middle of the search. For DFS, by contrast, the tree or graph is explicitly constructed beforehand, and unacceptable cases have already been thrown away — i.e. pruned — before any search begins.
So backtracking is DFS on an implicit tree, while DFS is backtracking without pruning.
For example, a common class of problems solved by backtracking is permutation and combination problems. A classic one: given n pairs of parentheses, how many valid parenthesis strings can be formed (every bracket must be matched)? This is a textbook backtracking problem — can you picture it as a graph? Either way, if you can solve it with a recursive function, and you draw a node and an edge every time the recursion descends a level and comes back, the resulting visualization is a tree (and a tree is a special case of a graph).
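To make the implicit tree concrete, here is a minimal backtracking sketch of my own (the function name and parameters are not from the post) for the n-pairs-of-parentheses problem: each recursive call is a node, each extension by `'('` or `')'` is an edge, and the `closed < opened` check is the pruning.

```python
def generate_parentheses(n: int) -> list[str]:
    """Enumerate all valid strings of n pairs of parentheses by backtracking."""
    res = []

    def dfs(path: str, opened: int, closed: int) -> None:
        if len(path) == 2 * n:      # a leaf of the implicit tree: a complete string
            res.append(path)
            return
        if opened < n:              # edge: we may still open a bracket
            dfs(path + '(', opened + 1, closed)
        if closed < opened:         # prune: only close a bracket that has a match
            dfs(path + ')', opened, closed + 1)

    dfs('', 0, 0)
    return res
```

For n = 3 this yields the 5 valid strings (the Catalan number C₃); the branches that would close an unmatched bracket are never even generated.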
```python
import functools
import heapq

# signature reconstructed from the problem; cmp (a deadline comparator)
# is defined elsewhere in the post
def scheduleCourse(self, courses: List[List[int]]) -> int:
    # drop courses that cannot fit even alone, then sort by deadline
    courses = sorted([c for c in courses if c[0] <= c[1]],
                     key=functools.cmp_to_key(cmp))
    if not courses:
        return 0
    days = 0
    count = 0
    q = []  # max-heap (negated durations) of the courses taken so far
    for c in courses:
        if days + c[0] <= c[1]:
            # the course fits before its deadline: take it
            days += c[0]
            count += 1
            heapq.heappush(q, -c[0])
        elif q and -q[0] > c[0]:
            # otherwise swap out the longest course taken so far if this one is shorter
            days += q[0] + c[0]  # q[0] is negative, so this shortens the schedule
            heapq.heappop(q)
            heapq.heappush(q, -c[0])
    return count
```
```python
def wordBreak(self, s: str, wordDict: List[str]) -> bool:
    n = len(s)

    def dfs(st: int) -> bool:
        # can the suffix s[st:] be segmented into dictionary words?
        if st == n:
            return True
        for w in wordDict:
            if s.startswith(w, st):
                res = dfs(st + len(w))
                if res:
                    return True
        return False

    return dfs(0)
```
One way to design a recursive algorithm is to pretend we have already designed it.
Suppose we already have an algorithm that decides whether the suffix of the target string starting at st can be segmented using the dictionary. Clearly, the suffix starting at len(s) — the full length of the target — has length 0 and can always be segmented. For everything else, we just iterate over the dictionary: strip off a dictionary word as a prefix, and recursively decide the remaining suffix.
The same idea can be phrased in backtracking terms: start searching at index st of the target string; whenever a dictionary word extends the current prefix, descend one level and continue in the same way from index st + len(word); if that exploration fails, back up to st; if it succeeds, return immediately. Doesn't that sound a lot like depth-first search? That is why I like to name my backtracking functions dfs.
Note that the algorithm we designed is quite elegant: it takes a single scalar parameter, the start index. And clearly the same st can be reached repeatedly. Take the target string babccc with the dictionary [a, b, ba, ab, c]: the prefix bab can be matched by the first few dictionary words in several different ways, so the suffix ccc will be searched over and over. If we cache results, we save a great deal of search time. We can use st as the key and cache the computed result.
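The repeated visits are easy to observe by counting how many times the recursion body actually runs, with and without memoization. This is a throwaway experiment of mine, not code from the post; the string and dictionary are chosen so that every path fails and must be fully explored:

```python
from functools import cache

def word_break_calls(s: str, wordDict: list[str], memoize: bool) -> tuple[bool, int]:
    """Return (answer, number of times the recursion body executed)."""
    calls = 0

    def dfs(st: int) -> bool:
        nonlocal calls
        calls += 1
        if st == len(s):
            return True
        return any(dfs(st + len(w)) for w in wordDict if s.startswith(w, st))

    if memoize:
        dfs = cache(dfs)  # rebinding the name makes the recursive calls hit the cache too
    return dfs(0), calls
```

On `"aaaaab"` with `["a", "aa"]` the uncached version enters the recursion 20 times, the cached one only 6 — once per distinct st. On longer inputs the gap is exponential versus linear.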
```python
from functools import cache

def wordBreak(self, s: str, wordDict: List[str]) -> bool:
    n = len(s)

    @cache  # note: this one line memoizes dfs on its start index
    def dfs(st: int) -> bool:
        if st == n:
            return True
        for w in wordDict:
            if s.startswith(w, st):
                res = dfs(st + len(w))
                if res:
                    return True
        return False

    return dfs(0)
```
Python makes this almost unfairly easy: just put the @cache decorator in front of the function, and its results are automatically memoized via lru_cache. With that, the solution no longer times out (TLE). After adding the cache the time complexity drops sharply: each start index st of the target string caches one result, and for each index we only iterate over the dictionary once.
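As a standalone illustration (the fib function here is just my example, not from the post), @cache is shorthand for functools.lru_cache(maxsize=None), and the decorated function exposes the usual lru_cache statistics:

```python
from functools import cache

@cache  # equivalent to @lru_cache(maxsize=None)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(30)                   # fast: each n is computed exactly once
print(fib.cache_info())   # hits/misses recorded by the underlying lru_cache
```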
```python
# a cell on the boundary lets the water escape
if x - 1 < 0:
    return False
if x + 1 >= row:
    return False
if y - 1 < 0:
    return False
if y + 1 >= col:
    return False

# recurse into each unvisited neighbor; all of them must hold the water
res = True
if visited[x - 1][y] == 0:
    res = res and _rainDrop(x - 1, y)
if visited[x + 1][y] == 0:
    res = res and _rainDrop(x + 1, y)
if visited[x][y - 1] == 0:
    res = res and _rainDrop(x, y - 1)
if visited[x][y + 1] == 0:
    res = res and _rainDrop(x, y + 1)
```
```python
def decodeString(self, s: str) -> str:
    res = ''    # decoded output
    count = 0   # repeat count being parsed
    p = ''      # pattern collected inside the current brackets
    state = 'print'
    for c in s:
        if state == 'print' and ord('a') <= ord(c) <= ord('z'):
            res += c
        elif c.isdigit():  # original checked '1'..'9', which drops the 0 in counts like 10
            count = count * 10 + int(c)
        elif c == '[':
            state = 'start'
        elif c == ']':
            res += p * count
            count, p = 0, ''
            state = 'print'
        elif state == 'start' and ord('a') <= ord(c) <= ord('z'):
            p += c
    return res
```
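One caveat: this single-level state machine handles inputs like `3[a]2[bc]`, but not nested brackets such as `2[a3[b]]`. A stack-based variant (my sketch, not from the post) covers nesting by saving the enclosing prefix and repeat count at each `[`:

```python
def decode_string(s: str) -> str:
    stack: list[tuple[str, int]] = []  # saved (prefix, repeat) for each open bracket
    cur, num = '', 0
    for c in s:
        if c.isdigit():
            num = num * 10 + int(c)
        elif c == '[':
            stack.append((cur, num))   # remember what came before this bracket
            cur, num = '', 0
        elif c == ']':
            prev, k = stack.pop()
            cur = prev + cur * k       # expand the finished bracket into its prefix
        else:
            cur += c
    return cur
```

The stack plays exactly the role of the recursion stack in a backtracking formulation: each `[` descends a level, each `]` returns.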