#P13434. [GCJ 2009 #1B] Decision Tree

    ID: 15308 | Remote judge | Time limit: 3000 ms | Memory limit: 1024 MiB | Difficulty: 4 | Tags: tree data structures, 2009, Special Judge, depth-first search (DFS), Google Code Jam


Problem Description

Decision trees -- in particular, a type called classification trees -- are data structures that are used to classify items into categories using features of those items. For example, each animal is either "cute" or not. For any given animal, we can decide whether it is cute by looking at the animal's features and using the following decision tree.

(0.2 furry
  (0.81 fast
    (0.3)
    (0.2)
  )
  (0.1 fishy
    (0.3 freshwater
      (0.01)
      (0.01)
    )
    (0.1)
  )
)

A decision tree is defined recursively. It always has a root node and a weight. It also, optionally, has a feature name and two sub-trees, which are themselves decision trees.

More formally, a decision tree is defined using the following grammar.

tree ::= (weight [feature tree tree])
weight is a real number between 0 and 1, inclusive
feature is a string of 1 or more lower case English letters

The part inside the square brackets, [], is optional. The parentheses, (), weight and feature are tokens. There will be at least one whitespace character between any two tokens, except (possibly) after an open-bracket '(' or before a close-bracket ')'. Whitespace characters are space (' ') and endline ('\n').
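
The grammar above maps directly onto a short tokenizer plus a recursive-descent parser. The following Python sketch is not part of the official statement; the node layout (weight, feature, left, right) and the helper names tokenize and parse_tree are choices made here purely for illustration.

def tokenize(text):
    # '(' and ')' may touch their neighbours, so pad them before splitting on whitespace.
    return text.replace('(', ' ( ').replace(')', ' ) ').split()

def parse_tree(tokens, pos=0):
    # tree ::= ( weight [feature tree tree] )
    weight = float(tokens[pos + 1])          # tokens[pos] is '('
    pos += 2
    if tokens[pos] == ')':                   # leaf: no feature, no sub-trees
        return (weight, None, None, None), pos + 1
    feature = tokens[pos]
    left, pos = parse_tree(tokens, pos + 1)
    right, pos = parse_tree(tokens, pos)
    return (weight, feature, left, right), pos + 1   # skip the closing ')'

Joining all the lines of a tree description into one string before tokenizing keeps the parser independent of where the line breaks happen to fall.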

To figure out how likely the animal is to be cute, we start at the root of the tree with probability $p$ set to 1. At each node, we multiply $p$ by the weight of the node. If the node is a leaf (has no sub-trees), then we stop, and the value of $p$ is the probability that our animal is cute. Otherwise, we look at the feature associated with the node. If our animal has this feature, we move down into the first sub-tree and continue recursively. If it does not have this feature, then we move down into the second sub-tree and continue in the same way.

For example, a beaver is an animal that has two features: furry and freshwater. We start at the root with $p$ equal to $1$. We multiply $p$ by $0.2$, the weight of the root, and move into the first sub-tree because the beaver has the furry feature. There, we multiply $p$ by $0.81$, which makes $p$ equal to $0.162$. From there we move further down into the second sub-tree because the beaver does not have the fast feature. Finally, we multiply $p$ by $0.2$ and end up with $0.0324$ -- the probability that the beaver is cute.
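
A matching evaluation sketch (again an illustration, not part of the statement: cuteness is a hypothetical helper working on the (weight, feature, left, right) tuples used in the parsing sketch above):

def cuteness(node, features):
    weight, feature, left, right = node
    if feature is None:                  # leaf: stop and return the accumulated product
        return weight
    branch = left if feature in features else right
    return weight * cuteness(branch, features)

# The cute-animal tree from the statement, written as nested tuples:
example = (0.2, 'furry',
           (0.81, 'fast', (0.3, None, None, None), (0.2, None, None, None)),
           (0.1, 'fishy',
            (0.3, 'freshwater', (0.01, None, None, None), (0.01, None, None, None)),
            (0.1, None, None, None)))

print(cuteness(example, {'furry', 'freshwater'}))   # ~0.0324, matching the beaver walk-through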

You will be given a decision tree and a list of animals with their features. For each animal, you need to return the probability that it is cute.

Input Format

The first line of input contains a single integer, $N$, the number of test cases. $N$ test cases follow.

Each test case description will start with a line that contains an integer $L$ -- the number of lines that describe a decision tree. The next $L$ lines will contain a decision tree in the format described above. The line after that will contain $A$ -- the number of animals. The next $A$ lines will each contain the description of one animal in the following format.

$\text{animal}\ n\ \text{feature}_1\ \text{feature}_2\ \dots\ \text{feature}_n$

Output Format

For each test case, output one line containing "Case #x:" followed by exactly $A$ lines, one per animal, in the same order as they appear in the input. Each line should contain the probability that the animal is cute. Answers that are precise to within an absolute or relative error of $10^{-6}$ will be considered correct.

Sample Input

1
3
(0.5 cool
  ( 1.000)
  (0.5 ))
2
anteater 1 cool
cockroach 0
Sample Output

Case #1:
0.5000000
0.2500000
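
A self-contained end-to-end sketch in Python, using the same hypothetical helpers as above; under these assumptions it reproduces the sample output (printing 7 decimal places, well within the $10^{-6}$ tolerance):

import sys

def tokenize(text):
    return text.replace('(', ' ( ').replace(')', ' ) ').split()

def parse_tree(tokens, pos=0):
    weight = float(tokens[pos + 1])          # tokens[pos] is '('
    pos += 2
    if tokens[pos] == ')':                   # leaf
        return (weight, None, None, None), pos + 1
    feature = tokens[pos]
    left, pos = parse_tree(tokens, pos + 1)
    right, pos = parse_tree(tokens, pos)
    return (weight, feature, left, right), pos + 1

def cuteness(node, features):
    weight, feature, left, right = node
    if feature is None:
        return weight
    return weight * cuteness(left if feature in features else right, features)

def main():
    lines = sys.stdin.read().split('\n')
    idx = 0
    n = int(lines[idx]); idx += 1
    for case in range(1, n + 1):
        l = int(lines[idx]); idx += 1
        # Join the L tree lines into one string so line breaks inside the tree do not matter.
        tree, _ = parse_tree(tokenize(' '.join(lines[idx:idx + l])))
        idx += l
        a = int(lines[idx]); idx += 1
        print(f"Case #{case}:")
        for _ in range(a):
            parts = lines[idx].split(); idx += 1
            # parts[0] is the animal name, parts[1] its feature count, the rest are features.
            print(f"{cuteness(tree, set(parts[2:])):.7f}")

if __name__ == "__main__":
    main()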

Hints

Limits

  • $1 \leq N \leq 100$
  • All weights will be between 0 and 1, inclusive.
  • All weights will consist of only digits with at most one decimal point.
  • The weights will not start or end with a decimal point.
  • The weights will not have more than one 0 before a decimal point.
  • All animals and features will consist of between 1 and 10 lower case English letters.
  • All animal names within a test case will be distinct.
  • All feature names for a single animal will be distinct.
  • Each of the $L$ lines in a decision tree definition will have at most 80 characters, not including the endlines.

Small dataset (10 Pts)

  • $1 \leq L \leq 10$
  • $1 \leq A \leq 10$
  • $0 \leq n \leq 5$

Large dataset (11 Pts)

  • $1 \leq L \leq 100$
  • $1 \leq A \leq 100$
  • $0 \leq n \leq 100$