Table transformation / field parsing in PL/SQL


I have de-normalized table, something like

CODES

ID  | VALUE
10  | A,B,C
11  | A,B
12  | A,B,C,D,E,F
13  | R,T,D,W,W,W,W,W,S,S

The job is to convert it so that each token from VALUE generates a new row. Example:

CODES_TRANS

ID  | VALUE_TRANS
10  | A
10  | B
10  | C
11  | A
11  | B

What is the best way to do this in PL/SQL without using custom PL/SQL packages, ideally with pure SQL?

The obvious solution is to implement it via cursors (sketched below). Any ideas?
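
For reference, here is a minimal sketch of that cursor-based fallback, using the CODES and CODES_TRANS tables from the question and assuming 10g+ for REGEXP_SUBSTR; the answers below avoid this row-by-row processing:

BEGIN
  FOR r IN (SELECT id, value FROM codes) LOOP
    -- number of tokens = number of commas + 1
    FOR i IN 1 .. LENGTH(r.value) - LENGTH(REPLACE(r.value, ',')) + 1 LOOP
      INSERT INTO codes_trans (id, value_trans)
      VALUES (r.id, REGEXP_SUBSTR(r.value, '[^,]+', 1, i));
    END LOOP;
  END LOOP;
  COMMIT;
END;
/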


Another alternative is to use the model clause:

SQL> select id
  2       , value
  3    from codes
  4   model
  5         return updated rows
  6         partition by (id)
  7         dimension by (-1 i)
  8         measures (value)
  9         ( value[for i from 0 to length(value[-1])-length(replace(value[-1],',')) increment 1]
 10           = regexp_substr(value[-1],'[^,]+',1,cv(i)+1)
 11         )
 12   order by id
 13       , i
 14  /

        ID VALUE
---------- -------------------
        10 A
        10 B
        10 C
        11 A
        11 B
        12 A
        12 B
        12 C
        12 D
        12 E
        12 F
        13 R
        13 T
        13 D
        13 W
        13 W
        13 W
        13 W
        13 W
        13 S
        13 S

21 rows selected.

I have written up to 6 alternatives for this type of query in this blog post: http://rwijk.blogspot.com/2007/11/interval-based-row-generation.html

Regards, Rob.


I have a pure SQL solution for you.

I adapted a trick I found on an old Ask Tom site, posted by Mihail Bratu. My adaptation uses regex to tokenise the VALUE column, so it requires 10g or higher.

The test data:

SQL> select * from t34
  2  /

        ID VALUE
---------- -------------------------
        10 A,B,C
        11 A,B
        12 A,B,C,D,E,F
        13 R,T,D,W1,W2,W3,W4,W5,S,S

SQL>

The query:

SQL> select   t34.id
  2            , t.column_value value
  3  from t34
  4       , table(cast(multiset(
  5              select regexp_substr (t34.value, '[^(,)]+', 1, level)
  6              from dual
  7              connect by level <= length(value)
  8         ) as sys.dbms_debug_vc2coll )) t
  9  where t.column_value != ','
 10  /

        ID VALUE
---------- -------------------------
        10 A
        10 B
        10 C
        11 A
        11 B
        12 A
        12 B
        12 C
        12 D
        12 E
        12 F
        13 R
        13 T
        13 D
        13 W1
        13 W2
        13 W3
        13 W4
        13 W5
        13 S
        13 S

21 rows selected.

SQL> 


Based on Celko's book, here is what I found and it's working well!

  SELECT 
    TABLE1.ID
    , MAX(SEQ1.SEQ) AS START_POS
    , SEQ2.SEQ AS END_POS
    , COUNT(SEQ2.SEQ) AS PLACE
  FROM 
    TABLE1, V_SEQ SEQ1, V_SEQ SEQ2
  WHERE 
    SUBSTR(',' || TABLE1.VALUE || ',', SEQ1.SEQ, 1) = ','
    AND SUBSTR(',' || TABLE1.VALUE || ',', SEQ2.SEQ, 1) = ','
    AND SEQ1.SEQ < SEQ2.SEQ
    AND SEQ2.SEQ <= LENGTH(TABLE1.VALUE) + 2 -- +2 so the wrapping comma that closes the last token qualifies
  GROUP BY TABLE1.ID, TABLE1.VALUE, SEQ2.SEQ

Where V_SEQ is a static table with one field:

SEQ, integer values 1 through N, where N is at least the longest VALUE plus 2 (to cover the two wrapping commas).
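
One way to populate such a table, as a sketch (the bound of 1000 is arbitrary; any row generator will do):

-- SEQ = 1..1000; pick a bound of at least the longest VALUE plus 2,
-- so the wrapping commas are covered as well
CREATE TABLE V_SEQ AS
  SELECT LEVEL AS SEQ
  FROM DUAL
  CONNECT BY LEVEL <= 1000;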

This is based on the fact that the VALUE is wrapped by ',' on both ends, like this:

,A,B,C,D,

If your tokens are fixed length (like in my case), I simply use the PLACE field to calculate the actual string. If they are variable length, use START_POS and END_POS (see the sketch after the final query below).

So, in my case, tokens are 2 characters long, and the final SQL is:

SELECT 
    TABLE1.ID
    , SUBSTR(TABLE1.VALUE, T_SUB.PLACE * 3 - 2 , 2 ) AS SINGLE_VAL
FROM
(
  SELECT 
    TABLE1.ID
    , MAX(SEQ1.SEQ) AS START_POS
    , SEQ2.SEQ AS END_POS
    , COUNT(SEQ2.SEQ) AS PLACE
  FROM 
    TABLE1, V_SEQ SEQ1, V_SEQ SEQ2
  WHERE 
    SUBSTR(',' || TABLE1.VALUE || ',', SEQ1.SEQ, 1) = ','
    AND SUBSTR(',' || TABLE1.VALUE || ',', SEQ2.SEQ, 1) = ','
    AND SEQ1.SEQ < SEQ2.SEQ
    AND SEQ2.SEQ <= LENGTH(TABLE1.VALUE) + 2 -- +2 so the wrapping comma that closes the last token qualifies
  GROUP BY TABLE1.ID, TABLE1.VALUE, SEQ2.SEQ
) T_SUB
INNER JOIN 
  TABLE1 ON TABLE1.ID = T_SUB.ID
ORDER BY TABLE1.ID, T_SUB.PLACE   
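
For variable-length tokens, the same derived table works with a plain SUBSTR on the original column. A sketch (START_POS and END_POS index the comma-wrapped string, so inside VALUE the token starts at START_POS and is END_POS - START_POS - 1 characters long):

SELECT 
    TABLE1.ID
    -- extract the token between the two commas found by the derived table
    , SUBSTR(TABLE1.VALUE, T_SUB.START_POS, T_SUB.END_POS - T_SUB.START_POS - 1) AS SINGLE_VAL
FROM
(
  SELECT 
    TABLE1.ID
    , MAX(SEQ1.SEQ) AS START_POS
    , SEQ2.SEQ AS END_POS
    , COUNT(SEQ2.SEQ) AS PLACE
  FROM 
    TABLE1, V_SEQ SEQ1, V_SEQ SEQ2
  WHERE 
    SUBSTR(',' || TABLE1.VALUE || ',', SEQ1.SEQ, 1) = ','
    AND SUBSTR(',' || TABLE1.VALUE || ',', SEQ2.SEQ, 1) = ','
    AND SEQ1.SEQ < SEQ2.SEQ
    AND SEQ2.SEQ <= LENGTH(TABLE1.VALUE) + 2
  GROUP BY TABLE1.ID, TABLE1.VALUE, SEQ2.SEQ
) T_SUB
INNER JOIN 
  TABLE1 ON TABLE1.ID = T_SUB.ID
ORDER BY TABLE1.ID, T_SUB.PLACE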


Original Answer

In SQL Server T-SQL we parse the string and build a table object. Here is sample code; maybe you can translate it (a rough Oracle sketch follows the link):

http://rbgupta.blogspot.com/2007/10/tsql-parsing-delimited-string-into.html
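
A rough Oracle translation of that idea is a pipelined table function; this is a sketch only (the name split_csv is made up here, and since the question asked to avoid custom PL/SQL it just mirrors the T-SQL approach rather than recommending it):

CREATE OR REPLACE FUNCTION split_csv(p_str IN VARCHAR2)
  RETURN sys.dbms_debug_vc2coll PIPELINED
IS
BEGIN
  -- pipe one row per comma-separated token (tokens = commas + 1)
  FOR i IN 1 .. LENGTH(p_str) - LENGTH(REPLACE(p_str, ',')) + 1 LOOP
    PIPE ROW (REGEXP_SUBSTR(p_str, '[^,]+', 1, i));
  END LOOP;
  RETURN;
END;
/

SELECT c.id, t.column_value AS value_trans
FROM codes c, TABLE(split_csv(c.value)) t;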

Second Option

Count the number of commas per row and get the maximum. Say the widest row in the table has 5 commas (6 tokens): build a SELECT with one substring expression per possible token position. This makes it a set-based operation and should be much faster than RBAR (row-by-agonizing-row) processing; a rough sketch follows.
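
A sketch of that idea against the CODES table from the question, shown for a maximum of three tokens (extend the UNION ALL up to the largest comma count you find; REGEXP_SUBSTR is used here only for brevity):

SELECT id, value_trans
FROM (
      -- one branch per possible token position; positions past the last
      -- token come back NULL and are filtered out
      SELECT id, REGEXP_SUBSTR(value, '[^,]+', 1, 1) AS value_trans FROM codes
      UNION ALL
      SELECT id, REGEXP_SUBSTR(value, '[^,]+', 1, 2) FROM codes
      UNION ALL
      SELECT id, REGEXP_SUBSTR(value, '[^,]+', 1, 3) FROM codes
     )
WHERE value_trans IS NOT NULL
ORDER BY id;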
